Problem Statement: What prediction is used to calculate the prediction bias for a dynamics Inferential Quality (IQ) controller?
Solution: The prediction bias for a steady state IQ is the difference between the lab sample and the average of the predictions starting at the sample time minus the offset and history interval. For a dynamics IQ, the prediction bias is the difference between the analyzer value and the next cycle prediction. There are no offset or history interval settings for a dynamics IQ. Keywords: dynamics IQ prediction bias References: None
Problem Statement: How large can the file event.dat grow while buffering data when an Aspen InfoPlus.21 repository is paused or stopped?
Solution: The Standard C runtime I/O library only supports two GByte files. The file pointer used in the historian and passed to these I/O calls is a signed 32-bit integer. This gives it a range from -2,147,483,648 to 2,147,483,647, or two GBytes. Once the file reaches two GBytes, Aspen InfoPlus.21 discards new data. Keywords: event.dat archiver max maximum maximum size max size largest size References: None
Problem Statement: When trying to test a configuration, Aspen Batch and Event Extractor fails with the following error message: Error with custom load table object to AtExtractorAPI_B21.AtXTBL_B21 -> B21_LoadDesignator: B21BAI-60244: Invalid Data source name
Solution: In order to run the Extractor Configuration Test Tool, the user needs to create, in the Aspen Production Record Manager Administrator, a batch data source called "TEST_DATASOURCE" and an area called "TEST_B21_AREA". Keywords: None References: None
Problem Statement: This knowledge base article discusses whether or not the Gas Compressibility Factor for Products defined as Type LPG can realistically use the default value for the Gas Compressibility Factor (Z), for performing vapor phase calculations in AORA (Aspen Operations Reconciliation and Accounting) Model Reconciliations.
Solution: When a Product in your AORA Model is defined to be of Type LPG, then the amount of gas material present in the vapor space of the Pressurized Tank will be calculated and its corresponding Vapor Mass then added to the Total Mass of the LPG Liquid, which was calculated from the "Corrected" (Net Standard) Liquid Inventory Volume at the Base Temperature for your model. A.) The "Compressibility Factor" of the LPG Gas is used in the vapor space calculation noted above to determine the Mass of the LPG material present in the vapor phase. B.) The "Compressibility Factor" value is computed from and represents the ratio of the actual volume of gas at a given temperature and pressure to the volume of gas that is instead calculated by the ideal gas law at the same temperature and pressure (without any consideration of the compressibility factor). ** Therefore, the value of the "Compressibility Factor" depends on the temperature, pressure and composition of the gas. As long as these variables do not substantially change you should be able to use the default value for the LPG Compressibility in your model. NOTE: Most customers typically do not import Gas Compressibility Factors into their AORA Models as new readings each day. Thus, if a Gas Compressibility Reading is not available for a given model operating date, then AORA will use the default value, or the simulated value for the Gas Compressibility Factor in the LPG Vapor Space calculations when calculating the Vapor Mass. Keywords: Constant Gas Compressibility Factor LPG Vapor Z References: None
Problem Statement: This knowledge base article explains why the following error can be returned when a user attempts to use the Aspen Operations Reconciliation and Accounting (AORA) Excel Add-In to Import Tank Inventory Readings or Pipe Flow Meter Readings or Lab Instrument Readings. UOM Incorrect Type or Not Found in Database
Solution: The Error Message mentioned above is generated because the Unit of Measure (UOM) specified for the Tank Inventory, Pipe Flow Meter, and/or Lab Instrument Readings in the AORA Advisor Excel Add-In does Not MATCH a UOM Type defined in Aspen Advisor or the UOM Type is incorrect for the UOM. To Check the UOM Type: 1. Open and login to your AORA Model Database using the Main Application GUI (Advisor.exe). 2. Navigate to Configure | Global | Units of Measure... 3. Double click the Unit of Measure in question then go to the Details Tab. 4. Ensure that the correct Unit of Measure Type is specified as shown below: NOTE: A common mistake when Creating New UOMs is to specify a Gas Volume or Gas Density UOM Type for UOMs which pertain to Liquids, and vice-versa. The following procedure can be used as a further test to indicate whether or not the UOM is configured correctly. 1. Open and login to the AORA Model Database using the AORA Excel Add-In (Advisor.xla). 2. Click the General Interface | Read Model Information button. 3. Select to Read UOM Configuration information. ** Only the UOMs which have a UOM Type defined will be returned. Keywords: Configuration Excel Add-In Not Found in Database Modify UOM Type References: None
Problem Statement: When accessing a Microsoft Access database located on a remote node, a query returns the error: Failed to connect to link ODBCLinkName: [Microsoft][ODBC Microsoft Access Driver] ‘(unknown)’ is not a valid path. Make sure that the path name is spelled correctly and that you are connected to the server on which the file resides. This article explains how to prevent this error.
Solution: Aspen SQLplus expects fully qualified UNC path names pointing to Microsoft Access databases on remote nodes; however, Microsoft’s ODBC Administrator requires you to create a network drive to reference the database. Use the Microsoft ODBC 32-bit Data Source Administrator to create an ODBC data source name pointing to the remote Microsoft Access database. After choosing the System DSN tab, press the Add… button and select Microsoft Access Driver (*.mdb) followed by Finish. Next, supply a Data Source Name and press the Select… button. Click Network… to browse to the remote Microsoft Access database. Here, the Microsoft ODBC Data Source Administrator forces you to select a network drive that will be mapped to the UNC path pointing to the Microsoft Access database. Now, open the Aspen SQLplus query writer and press the database wizard button. Press Add Link to point to the ODBC Data Source Name referencing the remote Microsoft Access database. When you test the link after saving the reference, you may see the following error: To solve the problem, open Regedit from a command window and find the key HKEY_LOCAL_MACHINE\SOFTWARE\Wow6432Node\ODBC\ODBC.INI\ODBCDataSourceName where ODBCDataSourceName is the name of the ODBC data source. Change DBQ to the fully qualified UNC reference. Now, you should be able to reference the tables in the remote Microsoft Access database using the data links wizard in the Aspen SQLplus query writer. Keywords: Remote database Unknown path ODBC References: None
Problem Statement: When a configuration is scheduled to extract data between a Start and End Time using the Batch Extractor program with an ORACLE database, the time stamp format used in Aspen Extractor does not work in the ORACLE database. In the Aspen Extractor program, when the default time stamp format is used the query is created with no quotes surrounding the date times, causing an SQL error. SQL Error - ORA-00933 SQL command not properly ended You tried to execute an SQL statement with an inappropriate clause. SQL Error - ORA-01843: not a valid month A date specified an invalid month.
Solution: The default ISO date format "yyyymmdd hh:nn:ss" works with MS SQL Server, but for Oracle, the data type must be specified. The "Query Format" string to use in the "Extraction Server Properties" dialog when working with an ORACLE database is: TI\ME\STA\MP ''yyyy-mm-dd hh:nn:ss'' The backslash (\) before M indicates to the Extractor to use it as a literal, versus replacing it with the value of the month from the datetime in the query. To configure the correct data type format, in the Batch Extractor program go to the Admin menu and select - Configure Source Batch Extraction Servers. Fill in the fields for New Server and Source DB Type, then click on the Properties button and enter TI\ME\STA\MP ''yyyy-mm-dd hh:nn:ss'' in the Query Format field Click OK to save your changes and go back to schedule a job to extract data. Keywords: batch extractor configuration oracle schedule ORA-00933 ORA-01843 References: None
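For reference, with this Query Format the Extractor builds Oracle date/time predicates using TIMESTAMP literals. The table and column names below are illustrative only (they depend on your source database schema); the generated SQL would contain predicates similar to:
SELECT * FROM BATCH_EVENTS
WHERE EVENT_TIME >= TIMESTAMP '2020-01-01 08:00:00'
AND EVENT_TIME < TIMESTAMP '2020-01-01 16:00:00';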
Problem Statement: As delivered, you cannot enable alarm alerting in aspenONE Process Explorer for tags defined by IP_PVDef. This article explains how to enable alerts for records defined by IP_PVDef. When adding alerts for records defined by IP_PVDef, you may see the error: refresh exception:Exception caught: Index: 0, Size: 0
Solution: Attached to this article is a query named EnablePVDefAlerts. This query updates the mapping record IPX_PV_TREND_MAP to facilitate alerting, explicitly points the mapping record field in the record IP_PVDef to IPX_PV_TREND_MAP, and changes the detailed display record point field in the record IP_PVDef from IP_PVDet to blanks to allow aspenONE Process Explorer to build a detailed display for records defined by IP_PVDef. Back up the Aspen InfoPlus.21 database, and then execute this query. Finally, stop Aspen InfoPlus.21 and reboot the Aspen InfoPlus.21 server. At this point, you should be able to enable alerts in aspenONE Process Explorer for tags defined by IP_PVDef. Keywords: alerts alerting IP_PVDef refresh exception Exception caught References: None
Problem Statement: This Knowledge Base article provides troubleshooting steps to resolve the following error: “Error: Authentication failed on the remote side (the stream might still be available for additional authentication attempts)” which may be encountered on some client computers when trying to use the Process Data Excel Add-in for Aspen InfoPlus.21 (IP.21).
Solution: Please try the following steps: · Check if Aspen Process Explorer (PE) is able to show trend data. PE has the same architecture as the Excel Add-in for fetching values from the IP.21 database, so if it works while the Excel Add-in doesn’t, the problem could be related to an Aspen Data Source Architecture (ADSA) component used only by the Excel Add-in. This component is the Aspen Process Data Service. · The Aspen Process Data Service uses the Windows Communication Foundation (WCF) Framework to send and receive data, which requires proper user authentication across different domains. Based on the error message, the client user account may not be initially authenticated by the server machine. · The Aspen Process Data Service component is only used by the PD COM Excel Add-in and it can be safely removed from ADSA without affecting the functionality of other components (including the COM Excel Add-in), which will do their work using the DA component instead. The DA connection to the server does not require authentication. · There are two possible locations where the Aspen Process Data Service can be found: the ADSA on the IP.21 server or the client User Data Sources ADSA on the client machine. Once you’ve located it, please remove it and retest the client computers that failed. · If it turns out that the PD service is not present in the ADSA, please compare a client machine that fails to one that doesn’t. Make sure both are members of the same domain and the user accounts are similarly configured in the domain as far as their permissions are concerned. · If none of the suggestions provided above helps, you may need to remove the client computers that have the authentication issue from the domain and re-add them to correct the problem. Keywords: References: None
Problem Statement: This knowledge base article explains how to compress a query contained in a text file into a record defined by CompQueryDef, ProcedureDef, or ViewDef from a command window or another query.
Solution: The program IQ located in the Aspen InfoPlus.21 code folder allows you to activate a query from a command window. IQ.exe also allows you to load a query contained in a text file into a compressed query record defined by CompQueryDef, ProcedureDef, or ViewDef. The command line syntax is IQ -load_record QueryRecordName TextFileName where QueryRecordName is the name of a record defined by CompQueryDef, ProcedureDef, or ViewDef and TextFileName is the text file containing the query to be compressed. Note: IQ does not create QueryRecordName. QueryRecordName must already be defined in the Aspen InfoPlus.21 database. As an example, suppose you want to save the contents of C:\temp\DeleteBlankOccsFromATransferRecord.txt to the CompQueryDef record DelBlankOccs. Then you could open a command window, navigate to c:\temp, and issue the command C:\temp>"C:\Program Files\AspenTech\InfoPlus.21\db21\code\iq" -load_record DelBlankOccs DeleteBlankOccsFromATransferRecord.txt This command would compress the source stored in DeleteBlankOccsFromATransferRecord.txt into the record DelBlankOccs and create the file DelBlankOccs.sql in ….\group200\sql. You can also use the system command from a query as in the following example: macro iqtask = 'C:\"Program Files"\AspenTech\InfoPlus.21\db21\code\iq'; macro filename = 'C:\temp\DeleteBlankOccsFromATransferRecord.txt'; system '&iqtask -load_record DelBlankOccs &filename'; Keywords: Compressed source Compress source Command window CompQueryDef ProcedureDef ViewDef Load record References: None
Problem Statement: What functions are available for the Aspen Process Data Addin?
Solution: The Process Data Add-in allows you to construct data functions that retrieve and compute data using Aspen-provided formulas and provide the results to an Excel worksheet. The Process Data Add-in can use data from Aspen InfoPlus.21, ODM, OPC-DA, OPC-HDA, PHD, PI, and RDBMS databases. The Aspen Process Excel Add-In Help in section Process Data Addin functions contains a complete listing of the Aspen Process Data Addin functions and their arguments. The Process Data Add-in contains the following data functions: GetCurrentValue ShowCurrentValue GetHistoricalValue ShowHistoricalValue GetHistoricalValues ShowHistoricalValues GetCalculationValue ShowCalculationValue GetCalculationValues ShowCalculationValues SQLplusQuery ShowSQLplusQuery Keywords: References: None
Problem Statement: How do I resolve the error "Failed to connect to servername. Automation error. No more threads can be created in the system" when connecting to an Aspen Calc server from an Aspen Calc client?
Solution: Try restarting the AspenTech Calculator Engine service on the Aspen Calc server. Keywords: Automation error No more threads can be created Failed to connect AspenTech Calculator Engine References: None
Problem Statement: Why does the program tsk_server take so much CPU time when monitoring processes using the Microsoft Windows Task Manager?
Solution: The program tsk_server is the process used by the Aspen InfoPlus.21 Task Service. One of the things done by tsk_server is periodically checking that all of the Aspen InfoPlus.21 processes are running and restarting any missing process if necessary. The list of active processes is contained in the registry, which means that tsk_server must access the Windows registry. Normally, this task takes almost no time; however, anti-virus checking software running on the Aspen InfoPlus.21 server may interfere with this activity by examining each registry access, causing tsk_server to consume a lot of CPU time and seriously degrading system performance. To work around this problem, place the executable program tsk_server into the exclusion list of the anti-virus checking software. Keywords: InfoPlus.21 task service slow Windows Task Manager Symantec Endpoint References: None
Problem Statement: TSK_HBAK will only back up history files that are marked as Shifted or Changed, and Aspen InfoPlus.21 will not allow you to select which history file sets to back up. How do I back up history files that are not marked as Shifted or Changed, and how can I select the file sets to back up?
Solution: The query BackupFileSets attached to this solution backs up file sets in a manner similar to TSK_HBAK. There must be a record defined by historybackupdef that is normally used to back up a repository. This record contains the backup path for changed file sets, and the query uses this path as the backup destination folder. The query prompts for a repository name along with starting and ending file set numbers to back up. The next prompt is for the record defined by historybackupdef used by TSK_HBAK to back up the repository. After receiving this information, the query pauses the repository. After pausing the repository, the query fetches the destination root folder from the column "Changed Location" in the historybackupdef record used by TSK_HBAK to back up the repository. Then, for each file set to be backed up, the query creates a folder named SQLplus_reposname_fsn_startdatetime_enddatetime. For example: SQLPlus_TSK_DHIS_3_13-02-08-142700_13-04-09-101720 means file set 3 from repository TSK_DHIS with a starting time of February 8, 2013, 14:27:00 and an ending time of April 9, 2013, 10:17:20. This is the same type of naming convention used by TSK_HBAK. After creating the destination folder, the query copies the contents of the file set to the destination folder using the command COPY /Y /B. After completing the copies, the query resumes the repository. If you have included the securable object SQLplus in your local security configuration, then the user running the query must have System privilege. If you have secured the Aspen InfoPlus.21 database, then the user running the query must have Admin privilege. Keywords: TSK_HBAK backup file set fileset References: None
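As a supplement to the solution above, the folder-creation and copy step performed by the query can be sketched in Aspen SQLplus as follows. This is a minimal sketch only: the destination and file set paths are hypothetical examples, the attached BackupFileSets query does considerably more (prompting, pausing, and resuming the repository), and if the SYSTEM command on your system does not run shell built-ins directly you may need to prefix the commands with CMD /C.
macro dest = 'D:\Backups\SQLPlus_TSK_DHIS_3_13-02-08-142700_13-04-09-101720';
macro fileset = 'D:\IP21\history\TSK_DHIS\arc3';
-- Create the destination folder, then copy the file set contents into it
system 'MKDIR "&dest"';
system 'COPY /Y /B "&fileset\*.*" "&dest"';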
Problem Statement: This Knowledge Base article provides a list of events that are recorded by Aspen Audit & Compliance Manager (AACM).
Solution: The Aspen Audit and Compliance Manager (AACM) Administrator is an MMC console application that allows you to monitor and manage the Audit & Compliance Server, including the transmittal of event message data to the secure Audit & Compliance database. It also provides administrative functions for administering the Audit & Compliance Server, message processing, and logging. For example: · Establishing and testing the connection to the database · Starting and stopping event processing · Server process monitoring · Logging · Backup, archival, restoration, and purging of the database Below is a list of Event-Generating applications and the types of Event Messages generated by them: · Aspen Framework (AFW): system security and Aspen Audit & Compliance Manager (AACM) access. · Aspen Production Record Manager (APRM): batch processing and system events. System data includes PRM security violations, configuration changes, and batch process data changes. · Aspen InfoPlus.21: tag and other system events. System data includes security violations, configuration changes, and Aspen InfoPlus.21 data changes. · Aspen Production Execution Manager (APEM): used to design, schedule, and execute processes, as well as track all execution-related data, by generating standard or custom documentation, and constitutes a 21CFR11 compliance-enabled system. · IP.21 Browser-based Aspen Audit & Compliance Manager: provides the capability of manually adding event and comment messages. Note: AFW, APRM, APEM, and Aspen InfoPlus.21 automatically create event messages for transmission and storage in the Audit & Compliance database. The Enter Event Message pane allows the manual entry of events and comments. The Aspen Audit & Compliance Manager (AACM) provides a common focal point to facilitate and coordinate audit trail handling in a 21CFR11 processing environment. Basic features include: · Event collection. The Aspen Audit & Compliance Manager collects and stores event data that originates from several primary sources: Aspen Framework, Aspen InfoPlus.21, Aspen Production Execution Manager, Aspen Production Record Manager, and the Enter Event Message pane, while providing complete event audit trail capabilities. · Event delivery. Event messages have end-to-end guaranteed delivery from origination to a secure Oracle or MS SQL Server database. These messages are transmitted to data storage by the Audit & Compliance Server. · Event viewing and analysis. Event search and viewing options allow the display of events and their comments from multiple sources in a time-synchronized manner. It also provides for filtering by multiple criteria. Comments: Multiple comments may be associated with a given event message. Both types of messages (Events and Comments) can be sorted and displayed using the Aspen Audit & Compliance Manager. Time: Two times are maintained for each event: · the time an event actually occurred · the time that an event is stored in the database These times are maintained using Greenwich Mean Time (GMT) as the international time standard format. Times display in either GMT or the user’s local time. User and Computer Identification: Messages are associated with a specific domain, computer, and user account. Reporting: Event data may be easily transferred to other formats such as Microsoft Excel, PowerPoint, or Word for use in a non-21CFR11-compliant environment by copying an Event History or Event Detail report.
Security: Access permission to Aspen Audit & Compliance Manager data is provided by database security. Manual event and comment entry from the Aspen Audit & Compliance Manager is provided by AFW security. Audit Integration: Audit integration between Aspen Production Execution Manager and AACM allows for the movement of Production Execution Manager audit data and report functionalities to AACM. The AACM event queue engine monitors the event queue and writes the event to the AACM database. The AACM user can then query the audit database via the Web interface based on user-defined queries. The Web component queries the data and displays the formatted data results. Keywords: References: None
Problem Statement: In general, Aspen InfoPlus.21 assigns random unique port numbers to client applications; however, firewalls sitting between the client applications and Aspen InfoPlus.21 servers require the use of specific ports. This article describes how to designate specific ports to be used for Aspen InfoPlus.21 client applications.
Solution: Aspen Process Explorer and other client applications request information from a Remote Procedure Call (RPC) Server in the InfoPlus.21 database. Five RPC Servers are built into Aspen InfoPlus.21. They are separated into five external tasks (visible in the InfoPlus.21 Manager). The RPC server tasks and their associated client application(s) are as follows: · TSK_ADMIN_SERVER: InfoPlus.21 Administrator, InfoPlus.21 Definition Editor · TSK_APEX_SERVER: Aspen Process Explorer · TSK_EXCEL_SERVER: Aspen Process Data (Excel) Add-In · TSK_ORIG_SERVER: all pre-v3.0 clients · TSK_DEFAULT_SERVER: all clients not connecting to the other four servers Problems may arise when a client application attempts to access one of these RPC servers across a firewall. If a client attempts to access the server on a port blocked by the firewall, the connection will fail. To allow for communication, InfoPlus.21 includes an option to assign a specific port number to each RPC server task. If the firewall permits two-way communication through the specified ports, clients will be able to connect to the server. In addition to these ports, port 111 must also be opened; this port is used by RPC for the initial communication. To assign the port number to the external tasks: 1. Open the InfoPlus.21 Manager. 2. Select the API task whose port you want to reassign. For Process Explorer, the task is TSK_APEX_SERVER. 3. In the command line parameters box, type -n1234 after the existing parameter. In this example, 1234 is the port number. The full command line parameter for TSK_APEX_SERVER will appear as -v3 -n1234 4. Click Update 5. Stop and restart the API server task for the change to take effect. NOTE: In addition to ports for each API server task, port 111 must be opened in the firewall. Port 111 is used for the initial API call. For additional information on connecting the InfoPlus.21 Administrator tool across a firewall, please see How do I make the Aspen InfoPlus.21 Administrator connect through a firewall? Keywords: firewall connection port References: None
Problem Statement: An Aspen InfoPlus.21 point is a tag with a historical repeat area. By choosing the Record Utilization tab on the Aspen IP21 Administrator Properties window, you can see the number of licensed points used; however, the number of points defined against IP_AnalogDef, IP_DiscreteDef, IP_TextDef, and similar records does not add up to the number of licensed points used. What other tags are counted as Aspen InfoPlus.21 points?
Solution: The query CountHistoryTags attached to this article finds all definition records with history repeat areas and counts the tags defined against those definition records. The total at the end should match the number of points the Aspen IP21 Administrator says are defined. Keywords: Sample query Number of IP21 Points Number of historized tags CountHistoryTags References: None
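As an illustration only (the attached CountHistoryTags query discovers the definition records with history repeat areas automatically), a quick Aspen SQLplus count over a few common historized definition records could look like this sketch:
select 'IP_AnalogDef' as Definition, count(*) as Tags from IP_AnalogDef
union
select 'IP_DiscreteDef', count(*) from IP_DiscreteDef
union
select 'IP_TextDef', count(*) from IP_TextDef;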
Problem Statement: In all currently supported versions of Aspen MES software, the act of contacting the domain controller is handled by the MES clients (as opposed to the security server) by default. Typically this process can be handled with greater speed if the activity is handled by the client. However, the option to delegate the task of looking up domain groups to the security server is still possible by setting a registry key in AFW Tools. For more information on the UseServerADSI registry key, please see solution 110390. If you encounter security problems, and suspect they are related to the failure to gain access to the domain controller/Active Directory server, it may be necessary to change the account running the security server to one of greater permissions. This solution will help identify which user account is running the security server, and therefore, where to change the user account if necessary.
Solution: The installation of the Aspen Security Server (ALS or AFW) will create an IIS virtual directory named Aspentech underneath the default web site and an application pool named Aspen Security Pool. MES client machines are configured to contact this web application in order to verify AFW security. Determining the account that is identified as running the security application pool will vary according to the method of authentication assigned to the directory. By default, the Aspen Security Pool will be configured to use the built-in ApplicationPoolIdentity account, but in some cases it may be necessary to change this account to a domain-based account. To verify or change the identity of this app pool, proceed as follows: 1. In the Internet Information Services (IIS) Manager console, expand the Default Web Site and select Application Pools. 2. Right-click on the Aspen Security Pool and select Advanced Settings... 3. Select the Identity row and click the "..." expansion button. 4. Switch to the Custom Account radio button and click "Set..." 5. Enter the domain\username, password, and confirm password of the specific account you want to run this app pool. It is suggested to use the same account as the one running the Aspen InfoPlus.21 Task Service and the AFW Security Client Service. (NOTE: The account entered here must have permission to read user objects on the Active Directory domain.) 6. Restart the IIS Admin Service to implement the change. Now, check the permissions on the AspenTech directory under the Default Web Site by selecting it in the left pane and double-clicking on the Authentication object in the right pane. If "Windows Authentication" is enabled, the authenticated user will be the account connecting to the security server, such as the user account logged into the machine that Process Explorer is connecting from. If both Anonymous Authentication AND Windows Authentication are selected, the security server will be identified as the specified Anonymous Authentication account. The only occasion where it would use Windows Authentication instead would be if the operating system encountered a problem attempting to validate the anonymous user account. NOTE: Basic Authentication should never be selected in the Aspentech directory. Basic Authentication is intended to display a dialog box to the interactive user, asking for their domain account and password. MES applications interact with the security server in a non-GUI mode, therefore this dialog would never be seen, and the user account would never be validated. For more information on using MES products in conjunction with Active Directory, see solution 113224. For information on how the various MES clients verify security, please see the following knowledgebase articles: 113221 - How does IP.21 verify AFW security? 113222 - How does SQLPlus verify AFW security? 113223 - How does IP.21 Process Browser verify AFW security? 113231 - How does Process Explorer verify AFW security? 113224 - General information on Active Directory Keywords: access denied the users permission does not allow the operation active directory services adsi ad lookup References: None
Problem Statement: This knowledge base article contains a sample query to return sub-batch characteristics.
Solution: The AspenTech Chemicals Batch Simulation distributed with the Aspen InfoPlus.21 database defines a batch with three subbatches or phases. The query FetchSubbatchCharacteristics attached to this article loops through all the batches created in a specified timespan and reports the batch number, the starting and ending times of each batch, along with the names, units, and starting and ending times of each subbatch. Keywords: Query Sample Batch Subbatch Sub-batch Phase Characteristics References: None
Problem Statement: What functions are available for the Aspen Process Data Past Addin?
Solution: The Process Data Past Add-in allows you to construct data functions that retrieve and compute data using Aspen-provided formulas and provide the results to an Excel worksheet. The Process Data Past Add-in can use data from Aspen InfoPlus.21 databases. The Aspen Process Excel Add-In Help - Legacy in section Process Data Addin functions contains a complete listing of the Aspen Process Data Past Addin functions and their arguments. The Process Data Past Add-in contains the following data functions: ATGetCurrVal ATGetTimeVal ATGetAttrVal ATGetTrend ATGetAgg ATGetFltData ATUserEntry ATEntryRead ATEntryWrite RunTagBrowser RunTimeLine Keywords: References: None
Problem Statement: AspenTech released new, more user-friendly versions of the Aspen MES Excel Add-Ins taking advantage of the Microsoft 'Ribbon' technology in version 7.3. What else is needed to be able to use the new Add-Ins?
Solution: Starting with version 7.3, AspenTech also introduced: · A new Aspen Data Source Architecture (ADSA) Service Component called AspenProcessDataService · A new Windows Service also called AspenProcessDataService The ADSA data source used by the new MES Excel Add-In must include the ADSA Service Component called AspenProcessDataService. Therefore, the ADSA server must be V7.3 or later. ADSA prompts the user for two parameters when adding the service component AspenProcessDataService. The first is the node name pointing to an Aspen InfoPlus.21 Server (V7.3 or later) where the Control Panel service called AspenProcessDataService is installed and running. The second parameter is a port number. The default port number is 52007, which must be allowed through any firewall. In order to use the new Aspen Process Explorer Excel Add-ins against an older Aspen InfoPlus.21 server, the client must point to a version 7.3 or later ADSA server in order to access the new ADSA component AspenProcessDataService. When configuring AspenProcessDataService, the node name must point to a version 7.3 or later Aspen InfoPlus.21 server running the Control Panel service. However, the ADSA components "Aspen DA for IP.21" and "Aspen SQLplus Service Component" would still point to the older pre-V7.3 Aspen InfoPlus.21 server. Keywords: Excel Add-Ins COM Ribbon FAILED: Could not connect to net.tcp://Servername:52007/PME/ProcessData/IDataService References: None
Problem Statement: How is the theoretical heat of cracking different from the apparent heat of cracking in FCC?
Solution: Theoretical heat of cracking: based on the kinetic lumps for the feed, the coke production, and the temperatures specified for the various parts of the FCC. Apparent heat of cracking: based on the measurement of temperatures, heat losses, or catalyst cooler duty. Any difference between the theoretical and apparent heat of cracking of 40 Btu/lb or less is considered to be a perfect match. If the difference is significantly more, then it is recommended that the user look at all of the measurements to make sure they are correct. Keywords: None References: None
Problem Statement: How do I determine which tags are not being scanned correctly by Aspen Cim-IO?
Solution: The following Aspen SQLplus query selects all rows from transfer records defined by IoGetDef, IOLongTagGetDef, and IOLLTagGetDef where the IO_DATA_STATUS field is not good: select Name, IO_MAIN_TASK, OCCNUM, IO_TAGNAME width 40, cast("IO_VALUE_RECORD&&FLD" as record) as "IP21_TAGNAME", IO_DATA_STATUS from IOGETDEF where IO_DATA_STATUS <> 'Good' and IO_DATA_PROCESSING = 'ON' union select Name, IO_MAIN_TASK, OCCNUM, IO_TAGNAME width 40, cast("IO_VALUE_RECORD&&FLD" as record) as "IP21_TAGNAME", IO_DATA_STATUS from IOLONGTAGGETDEF where IO_DATA_STATUS <> 'Good' and IO_DATA_PROCESSING = 'ON' union select Name, IO_MAIN_TASK, OCCNUM, IO_TAGNAME width 40, cast("IO_VALUE_RECORD&&FLD" as record) as "IP21_TAGNAME", IO_DATA_STATUS from IOLLTAGGETDEF where IO_DATA_STATUS <> 'Good' and IO_DATA_PROCESSING = 'ON' order by IO_MAIN_TASK, NAME, "OCCNUM"; Keywords: union bad initial good status transfer References: None
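As a quick supplement to the query above, the following sketch summarizes how many non-good tags each Cim-IO main task currently has (shown for the IoGetDef family only; add UNIONs for the other transfer record families as needed):
select IO_MAIN_TASK, count(*) as BAD_TAGS
from IOGETDEF
where IO_DATA_STATUS <> 'Good' and IO_DATA_PROCESSING = 'ON'
group by IO_MAIN_TASK;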
Problem Statement: This Knowledge Base article shows how to resolve an error when trying to save any changes to a Process Explorer document. The error is: "Aspen Process Explorer: Failed to save document. An unexpected network error occurred." The above-mentioned error is received when a user, after working for several minutes modifying a Process Explorer workspace containing a few plots, tries to save the changes. (Note: All plot and workspace files are saved on a standard mapped network drive.)
Solution: Check the Action property of your GPO mapped drive to make sure it is set to Update, not to Replace (see screen capture below). Additional information regarding this setting can be found here: https://community.spiceworks.com/topic/1139165-windows-10-losing-mapped-drives?page=1 https://social.technet.microsoft.com/Forums/en-US/92fa82ff-9ee9-40e7-964a-7ff40cff0b20/gpp-mapped-drives-disappear?forum=winserverGP https://social.technet.microsoft.com/Forums/en-US/5b53cc2d-aac3-4c23-9bd5-6f9322428365/gpo-mapped-network-drives-disappearing-windows-10?forum=winserverGP Keywords: References: None
Problem Statement: A user is running a report that is using the Aspen Process Data Excel Add-in. The report is referencing 600 tags and it takes a few minutes to run. This Knowledge Base article provides recommendations to make the report run quicker.
Solution: Here is a list of steps that can be tried to improve the performance of the spreadsheet: · Start by installing the latest cumulative and emergency patches for Aspen Process Data on the client computer. Also, the Process Data on the Aspen InfoPlus.21 server(s) will need to be upgraded to the same patch level. · If using a Public Data Source in ADSA, make sure all data sources listed are valid. An invalid or non-existent data source may significantly slow down the search speed of the Add-in. · Use IP addresses instead of node names when configuring data sources in ADSA. · If there are multiple data sources configured in the ADSA server the client connects to, check to see if the same slowness can be detected when connecting to all the IP.21 servers in the ADSA. · Switch from the Public Data Source in ADSA on the client computer of the user in question to the User Data Source and only configure the data source(s) that are accessed by that user. · If record level security is implemented on the IP.21 server in addition to data base security, the Add-in search speed will be negatively impacted. You can improve the access time by removing record level security on the tags that don't really need it. · If there's a Windows firewall enabled on the client machine, you can safely disable it by disabling the Windows service. Also, a firewall between the IP.21 Server(s) and the clients will have a negative impact on Add-in performance. · Excel Add-in may also be slow due to a network issue. Process Explorer and Excel Add-In use reverse lookup to connect to an IP.21 server. Solution 107955 (http://support.aspentech.com/webteamasp/KB.asp?ID=107955) describes how to test reverse lookup. Reverse lookup must be enabled for the Add-in to work properly. · A faulty network card on the client machine or a slow network connection to the IP.21 server may also cause the Add-in to run slower than expected. · The spreadsheet will be slow if the user is trying to get too many values per tag at the same time. · Finally, please review the following KB articles linked to this KB: How can I improve the retrieval time in Excel Spreadsheets which use the Aspen Process Data Add-in? KB 112272. Microsoft Excel is slow to start/open when loading the Aspen Process Data Add-In. KB 117467. Keywords: Slow Performance Addin References: None
Problem Statement: This knowledge base article describes how to resolve the problem "Error Reading Historical Repeat Area: Disk History Read Error 11" when accessing history using a query.
Solution: This error does not indicate there is a problem with history file sets or accessing history. Error 11 is a Microsoft error code that means "An attempt was made to load a program with an incorrect format." If you see this error while executing an interactive query using the SQLplus query writer or through an application using an ODBC connection into InfoPlus.21, try restarting TSK_SQL_SERVER. If a query activated from a querydef or compquerydef record produces the error, try restarting the IQ task (TSK_IQn) responsible for running the query. Keywords: Error reading historical repeat area Disk history read error 11 error 11 References: None
Problem Statement: There are two file types (formats) for the VBA version of Aspen Excel Add-in: .xla and .xlam (AtData.xla and AtData.xlam). The XLA format was used with MS Excel 97-2003. The XLAM format is for Excel 2007 and later. AspenTech Installer no longer installs the XLA version. Both the XLA and XLAM Add-in are identical. To convert a workbook from XLA to XLAM, you can run the Find and Replace tool in MS Excel to replace all formulas referencing AtData.xla with AtData.xlam for each workbook.
Solution: For step-by-step procedure please download and view the attached PDF document entitled: How to convert workbooks from AtData xla to AtData xlam Keywords: convert spreadsheet References: None
Problem Statement: How can I verify that the tag records are updating when IOGetDef records are turned on?
Solution: This example will write out a list of IOGetDef records where some tags are not updating. select name, "IO_LAST_STATUS", "IO_LAST_UPDATE" from IOGetDef where ("IO_RECORD_PROCESSING" = 'ON' and "IO_DATA_PROCESSING" = 'ON') and ("IO_VALUE_RECORD&&FLD" -> "IP_INPUT_TIME" < getdbtime - 1800); Notes: --The -> is the SQLplus indirection symbol. When it is used as in the example above, it goes to the destination record and picks a designated field, as opposed to simply returning the record and field name that are listed. For instance, if IO_VALUE_RECORD&FLD contains "ATCAI IP_INPUT_VALUE", instead of returning "ATCAI IP_INPUT_VALUE" SQLplus will actually go to the ATCAI record and look at the field after the -> symbol, which is IP_INPUT_TIME. --This example uses tag records from the Aspen InfoPlus.21 tag set (like IP_ANALOGDEF and IP_DISCRETEDEF) which store the time in a field called IP_INPUT_TIME. Other tag families may be used if that field name is adjusted. --The 1800 in the example above refers to 1800 tenths of seconds (or 180 seconds, or 3 minutes). --The two & (ampersands) in "IO_VALUE_RECORD&&FLD" are not a typographical error. The & has a special meaning in SQLplus, but since it is part of the field name, adding a second & cancels out that special meaning. --Other transfer record definitions like IOLongTagGetDef and IOLLTagGetDef will work as well (if they are substituted in place of IOGetDef). (This article was previously published as solution 119041.) Keywords: References: None
Problem Statement: This Knowledge Base article answers the following question: What is the ATC_CALCS QueryDef record used for?
Solution: The ATC_CALCS QueryDef record is used to update all AspenTech Chemical (ATC) Demo tags. The query record has no other purpose and it should be disabled if the snapshot doesn’t contain demo tags. Keywords: References: None
Problem Statement: Microsoft’s msxml4.dll file located in the C:\Windows\SysWOW64 directory has a known vulnerability that has not been patched by Microsoft. This Knowledge Base article provides the answer to the following question: Which versions of Aspen Technology software use msxml4.dll file?
Solution: MSXML4.DLL is primarily used by the Aspen Production Record Manager (APRM) application and is required by all APRM versions prior to V10. Versions 10 and later use msxml6.dll file which does not have any known vulnerabilities. AspenTech strongly recommends upgrading your Aspen software to the most recent version (V10.1 at the time this KB article was written) that does not use this dll file. Keywords: References: None
Problem Statement: The Publish tab in the Aspen Process Graphics Editor V10.1 Options dialog contains the Destination Path that has two options: Local and Remote. This Knowledge Base article explains the difference between them.
Solution: The "Local" option publishes graphics to your local server using a local path or a remote server using a UNC file path, which is considered a "Local" operation. The "Remote" option was added in the Aspen Process Graphics Editor V10.1 for publishing graphics to the specified aspenONE Process Explorer server through the Aspen Process Data REST service, not a file path. Keywords: References: None
Problem Statement: An Overall Equipment Effectiveness (OEE) event is a span of time in which something relevant to OEE happened. Events cannot overlap each other in time. The Events table will return an error if the user attempts to write events that overlap with each other. If, through some other method, events are written to the record that do overlap, then there will be an error message on the aBar in aspenONE Process Explorer that there are overlapping events. The presence of overlapping events may wrongly affect the OEE calculation results and must be corrected. This Knowledge Base article provides the answer to the following question: How to delete overlapping events in OEE?
Solution: In order to delete overlapping events in an OEE record, please set the OEE_EVENT_QSTATUS to “Bad” for the events. Here is an example of a query which would change the status in the OEE_EVENT_QSTATUS field of the overlapping events from GOOD to BAD, which will make them be ignored by A1PE (effectively deleting them). This query is only an example and will need modification to meet the specific needs of your data. UPDATE DA3OEE SET OEE_EVENT_QSTATUS = 'BAD' WHERE OEE_EVENT_START BETWEEN '04-OCT-18 08:38:00' AND '04-OCT-18 10:40:00'; DA3OEE is the name of the OEEDef tag. Keywords: References: None
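Before running the UPDATE above, it may help to list the events that fall inside the time window so you can confirm exactly which ones will be marked BAD. The record name and times below are the same illustrative values used in the example above:
SELECT OEE_EVENT_START, OEE_EVENT_QSTATUS FROM DA3OEE WHERE OEE_EVENT_START BETWEEN '04-OCT-18 08:38:00' AND '04-OCT-18 10:40:00';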
Problem Statement: What is the equation for the Linear D86 Based method for flash point in Petroleum?
Solution: The Linear D86 method, suggested by Andrew Newton in Holburn (Linear D86 Based), is a linear correlation of the form: FP = intercept + coeff1*D86_ibp + coeff2*D86_5, where D86_ibp is the D86 initial boiling point and D86_5 is the D86 5% distillation point. Keywords: None References: None
Problem Statement: How do I prevent error message pop-ups while the simulation case is solving? When an error message pop-up window appears, the user is required to press OK to continue solving the simulation. This happens with the Adjust and Recycle blocks when the solver iterations exceed the maximum iterations.
Solution: The user has the option to display the warnings and error messages in the Trace Window to prevent the need for user interaction and to allow the case to solve without interruption. To send the messages to Trace Window the user should follow the steps given below: 1. Select File and then Options 2. Select Simulation from the Options page and apply ticks to the Errors options as highlighted below. Keywords: Pop up Window, Trace Window, Error Message, Warning Message References: None
Problem Statement: How do I prevent the error B21BCN-54008 when importing an EVT file?
Solution: This issue can occur when the memory (.MEM) files become corrupt or the user does not have permission to open the file. To resolve this issue: 1. Check that the user has access to the Journal folder created by the user and access to C:\Program Files\AspenTech\Batch.21\Data\BatchConnect 2. If this does not resolve the issue, check the folders above for .MEM files and rename these files. For example: rename OpenBatchScanner_n.mem to OpenBatchScanner_OLD and restart the Aspen Batch Connect for OpenBatch service. Keywords: Batch.21, APRM, Aspen Production Record Manager, OpenBatch, EVT, Mem, B21BCN-54008 References: None
Problem Statement: When you transform a vector, how do you know if it’s good to use in DMC3?
Solution: The ability of the model to predict the CV response is what makes a good controller. The example below shows the prediction of a valve output for a flow controller. Notice that the prediction is not very good at the upper values. Valves usually exhibit a nonlinear relationship between valve opening and flow. In DMC3 Builder, go to the scatter view and select the flow OP as Y (Transformed) and the PV as X (Reference). Select the autofit using data to find the alfa value. Then transform the OP as a parabolic valve and re-run the case. Below are the results with the transformed vector; notice that the prediction is better in the upper region. DMC3 Builder provides the option to plot the Prediction vs Measurement in an X-Y plot. In the next image you can see how the prediction is closer to the 45 degree line with the transformed output, thus giving a better prediction. Keywords: Transforms APC Builder References: None
Problem Statement: Do you need a Recycle operation for a closed loop simulation in Aspen HYSYS?
Solution: The answer to this question is NO. A Recycle operation is required when the downstream material streams mix with upstream materials. In a closed loop system the material does not change. For example, a typical Propane Refrigeration simulation does not require a Recycle operation. In the above simulation the composition has been specified in the material stream 1. The material will remain unchanged in all the streams. What will be the problem if someone decides to add a Recycle unit in the closed loop simulation as shown below? In this case, there are two streams (stream 3 and 3-Rec) connected to a Recycle operation. Stream 3 contains specified composition and some other conditions. If you change any of the inputs in stream 1, those will be inconsistent with materials in the Recycle block outlet stream (3-Rec). Since the Recycle operation solves the simulation by iteration, in the iterative steps the updated results in the output stream will be inconsistent with the materials in stream 1. This will result in an Inconsistency error in the simulation. Avoid using a Recycle operation in a closed loop simulation. Keywords: Recycle Operation, Closed Loop Simulation References: None
Problem Statement: This knowledge base article explains how to resolve the following error: "Failed to Connect Sink, the Dadvise Method Failed"
Solution: This error can occur, even though the Aspen Cim-IO test API utility can successfully retrieve values from the OPC server, if the IO_Frequency field is set to zero in the transfer record. Setting the IO_Frequency to a non-zero value can work around this error. Keywords: cimio_t_api Get Transfer References: None
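If you prefer to make the change with Aspen SQLplus instead of the Aspen InfoPlus.21 Administrator, a minimal sketch is shown below. MYGETREC is a hypothetical transfer record name, and 10 is only an example value; confirm the expected units for IO_FREQUENCY in your transfer record definition before using it:
UPDATE IOGetDef SET IO_FREQUENCY = 10 WHERE NAME = 'MYGETREC';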
Problem Statement: How do I access VIND and DEP in the transformed space?
Solution: When a transform is invoked for a variable, selected attributes are always transformed following the read from the PCS (Process Control System) and then anti-transformed before the write to the PCS. The internal parameters used by the engine for the transformed measurement are DEPA for dependent variables and VINDA for independent variables. We are going to review the demo application col5x3 with a log10 transform on AI-2020. When the DEP value for AI-2020 is 3.06 and you apply a log10 transform, X = log10(3.06), the value of the transformation is X = 0.48. RTE-Based applications: You can add transforms at the case or master model. In this example we are going to use the master model. Open DMC3 Builder and go to the master model view, select transforms, then add the logarithm transform for AI-2020, and check the option "Use log10" to use the base 10 logarithm. Next, go to the Simulation view and do a step simulation. Click on the variable name and navigate to the measurement section. You will see there the current measurement DEP = 3.06 and the transformed value DEPA = 0.48. When the controller is deployed online, it is worth noting that Aspen Watch will keep both the raw and the transformed value in history. This is important since the raw model prediction is in the transformed space. ACO-Based applications: To access the values in the transformed space in DMCplus Simulate, perform a step simulation and then look at the internal variables under Controller > Internal variables. In order to see the actual values that the engine is using for the calculations we have to use the online software. Once you define the XFORM parameter in DMCplus Build, go to DMCplus Simulate to see its effect and look at VINDA for the VIND value and DEPA for the DEP value. First, in DMCplus Build, under the AI-2020 CV, look for the XFORM Parameter and select log10. Then go to DMCplus Simulate with the button and perform the step simulation. Note that the value is the same in the CV window, but if you go to internal variables, the value for DEPA001 (since it is the first CV) is in the transformed space. Keywords: Transformed Space DMC3 Builder DMCplus Build DMCplus Simulate References: None
Problem Statement: Does the valve in Aspen HYSYS handle critical flow in dynamics?
Solution: The valve model in Aspen HYSYS handles critical flow in dynamic mode. This information is available in the Flow Limits page under the Dynamics tab as shown below. Note that for vapour flows, choking is handled automatically. For liquid flow, the option must be enabled. You can also enable liquid choking on the Options page in the Integrator property view. For more information, please refer to the online help page. In newer Aspen HYSYS versions, the online help topics have been improved. Keywords: Critical Flow, Choking References: None
Problem Statement: Aspen Cim-IO has a feature called Store and Forward that prevents loss of data if the Aspen InfoPlus.21 system fails or the communication between the Cim-IO server and Aspen InfoPlus.21 fails. When creating a calculation, it is possible to make Aspen Calc aware of the situation where the connection between the Cim-IO Server and the Aspen InfoPlus.21 client is interrupted (Store and Forward) by ticking the box next to "Enable Store and Forward support" (see below). This Knowledge Base article provides Best Practices advice for configuring calculations with the Store and Forward feature enabled.
Solution: Store-and-forward (S&F) is a feature of Aspen Calc that interacts with historical tag values in InfoPlus.21 (IP21) and with the S&F feature of Aspen Cim-IO. However, this feature can be used for any case where updates to InfoPlus.21 (IP21) tag values may arrive later than expected (that is, later than the scheduled time of a calculation in Aspen Calc). When Aspen Calc processes scheduled calculations with S&F enabled, it will handle cases where data values used as inputs to the calculations may arrive late. In such cases, Aspen Calc will skip the regularly scheduled calculation and will begin monitoring for the arrival of the data. Once it has been determined that all data values have arrived, then Aspen Calc will perform the calculation using the historical values. If multiple time periods were skipped, then it will process those periods, as well. To use the S&F feature, the calculation must meet the following criteria: 1. Store-and-forward must be enabled for the calculation or for Aspen Calc overall. 2. The calculation must have at least one input parameter that meets the following criteria: a. It must not use extrapolation. b. It must be bound to an IP21 record and field. c. The associated IP21 record and field must have history (has a history field). d. The associated IP21 record and field must be a data type that has a quality status. e. The associated IP21 record and field must have a valid timestamp (such as IP_INPUT_TIME). 3. The calculation must have at least one output parameter that meets the following criteria: a. It must be bound to an IP21 record and field. b. The associated IP21 record and field must have a valid timestamp (such as IP_INPUT_TIME). 4. The calculation must be placed in an Aspen Calc schedule group and the schedule group must be enabled. If there are no input parameters that meet the above criteria, then the calculation is processed normally, thus it will not wait for missing store-and-forward values. Aspen Calc will verify that all the qualifying input parameters have newer timestamps than the last time one of the qualifying output parameters was updated (let’s label this the LastUpdateTime). Therefore, none of the output parameters should be modified by any other mechanism excepting for Aspen Calc itself. In other words, Aspen Calc expects to be in control of the timestamp value for all qualifying output parameters. If Aspen Calc determines that the associated value for one or more input parameters has not been updated since the LastUpdateTime, then it will skip processing and try again on the next cycle. For details about how input parameters are processed, please see the section later in this article. When Aspen Calc discovers that all values are newer than the LastUpdateTime, then it determines what would be the next scheduled update time (let’s call this the NextScheduledTime) since the LastUpdateTime based on the following rules: a. If the group schedule is based on a monthly schedule, then it increments the LastUpdateTime by one month. b. If weekly, then it increments by seven days. c. Otherwise, it increments the last update time by the rate of the schedule group. If the current time is before the NextScheduledTime, then Aspen Calc will skip processing and will begin monitoring for new data each minute. Otherwise, if all the input parameter values are newer than the NextScheduledTime, then Aspen Calc will process the calculation using historical values. 
If some input values are not newer than the NextScheduledTime, then Aspen Calc will skip processing and will begin monitoring for new data each minute. Monitoring involves reading current IP21 timestamps for each of the qualifying input parameters each minute. Note: Since Aspen Calc waits until new input values have arrived after the schedule time, this ensures that all values have arrived for the time period. This is especially important so that calculations containing aggregates will give the proper, expected result. Here is how the history is processed. To review, Aspen Calc will calculate using historical values if it determines that the current time is past the NextScheduledTime and that all qualifying input parameters have values newer than the NextScheduledTime. If any are older, then it skips the current processing cycle. Aspen Calc will read historical values for each of the qualifying input parameters. We will refer to them as “samples”. It reads the samples based on the time interval of the schedule group. For instance, if Aspen Calc has been waiting for several periods, then it will read samples for each of those intervals. The values returned depend on whether the interpolate flag is set for the associated input parameter. If so, it uses the interpolated value at each time interval, otherwise it uses the last actual value from history. Aspen Calc will invoke the calculation formula for each time interval passing the acquired input values. Calculations that contain aggregate functions, (like TagAverage, etc.) might not use the supplied input values since it will re-read from history as part of the aggregate function call. Note: For calculations that contain aggregates, this will result in extra readings from IP21 history. First, Aspen Calc reads the history for each input tag to obtain begin and end time stamps for each input value sample. Then for each aggregate function used in the calculation itself, this results in another read of history for each time period. So, for example, if there are two input parameters, then history is read twice. Then if four sample periods were gathered, then history is read once for each aggregate function call in the calculation. So, suppose the calculation calls TagAverage and TagStatistics; this would require two history reads. Multiplying this by the four periods you get eight calls to history. Add the original two and you get a total of ten separate history reads. Input Parameter Processing Aspen Calc store-and-forward processing works by reading values from IP21 history for each input parameter that is bound to an IP21 tag. However, two cases exist whereby gaps could occur in the calculated results, such as: 1. When incoming values are pending in the IP21 history queue and not yet committed to storage. 2. When incoming values are excluded from IP21 history storage due to value compression. Many sites might never experience this case since it involves a mix of the following situations: Sites having many tags configured for store-and-forwarding. Sites having many store-and-forward calculations with short scheduling cycles. Sites where performance issues could cause the IP21 history queue to process more slowly. Sites having history compression enabled without a maximum time interval correlated to the calculation schedule. Regarding case 1, for versions prior to v8.7, Aspen Calc uses the fixed area timestamp, such as IP_INPUT_TIME, to determine when new data has arrived and thus the calculation should run. 
But there is a chance for gaps in the calculated results as explained earlier. For version v8.7 and higher, the algorithm was changed to use the last value stored in IP21 history to determine the arrival time of data for input parameters bound to IP21 tags. This removes the possibility of gaps in the IP21 history request due to values pending in the history queue.
Regarding case 2, it is possible to configure IP21 history compression such that no values are stored for periods of time where the incoming values remain within the compression limits (i.e. within the box car slope limits). For versions prior to v8.7, occasional gaps could occur in the calculated result due to skipped values that were compressed out. For version v8.7 and higher, no gaps will occur; however, Aspen Calc might appear to enter store mode if multiple periods of skipped values occur. Since Aspen Calc relies on samples stored in history, it is important to configure IP21 compression properly. This is especially important since Aspen Calc now expects values in history to trigger the scheduled store-and-forward calculations. There are two ways to handle compression for tags that are used as inputs to calculations:
1. Consider disabling IP21 history compression altogether for tags used in Aspen Calc store-and-forward calculations. This is done by setting a record’s IP_DC_SIGNIFICANCE value to undefined (specify a single question mark (?)).
2. Consider using compression settings that force values to be stored into history at a frequency close to that of the Aspen Calc calculation. This is done by setting IP_DC_MAX_TIME_INT to an interval that is close to that of the calculation schedule. For example, if a calculation is scheduled at one-minute intervals, then the IP_DC_MAX_TIME_INT value should be something like “+000:01:00.0”, or shorter.
Troubleshooting Tips
1. Calculations whose output value is only updated inside an “IF” statement can cause severe performance issues. This is because Calc uses the timestamp of the last output value written to determine the last successful calculation. If the output tag never gets updated, then Calc will continue to process all input samples on each processing loop. Over time, the size of the input sample set will grow such that Calc will process a maximum of 10,000 samples each time. This includes reading 10,000 samples from history, then running the calculation 10,000 times, once for each sample. If there are multiple calcs designed this way, then it gets worse. It is important to design S/F calcs such that at least one output value is updated for each scheduled interval. If you do not want to update the primary output, then create a dummy secondary output for the purpose of keeping the scheduled S/F calc synchronized.
2. It is not recommended to have active store/forward calculations that are in error, for several reasons:
a. One is because partial record updates could still occur. Just because the calculation is in error does not mean that it did not write to the IP21 record. For example, you might see a calculation error that reads, “Data changed but no history generated”. The key point is that “data changed”. So, this means that the calculation did write to the fixed area IP_INPUT_VALUE field, but the subsequent attempt to write to history failed. IP21 is not a transactional database, so the fixed area fields, such as the IP_ALARM_STATE field, will be processed without a rollback due to subsequent errors.
b.
The other is due to potentially substantial performance degradation, as the calc might continuously process many history samples over and over on each calculation schedule interval. For all errant calculations, the customer should perform one of these actions:
Fix the cause of the error. In the above case, it is likely that the calculation is attempting to write values with timestamps that are earlier than the XOLDESTOK time value. There is a tool that can be used to update this for a record.
Remove the errant calculations from scheduling.
3. Calculations that generate errors can slow the processing of scheduled calcs. This includes calcs that contain On Error Resume Next. If there are calcs that contain unresolved errors, then they should be removed from scheduled processing.
4. Although it can be time consuming, one way to find the problem calcs is to remove all calcs from scheduling, then add them back one by one. But do not add calcs that have unresolved errors, as errant calcs should never be scheduled. At some point, you should find the calcs that are causing the performance issues (if any).
Keywords: None References: None
Problem Statement: Why is the downstream pressure at a node (connector or tee) higher than the upstream pressure?
Solution: In Aspen Flare System Analyzer the user has the option to display the calculated results on either a "Static Pressure" or "Total Pressure" basis. Displaying the calculated static pressure allows the user to observe instances of static pressure recovery. Static pressure recovery usually occurs in sections of the flare system where there is a change in line size; more specifically, an increase in the line size. The total pressure accounts for both the dynamic and static pressure components in the system. When the calculated pressures are displayed as total pressures, the downstream pressures should not be larger than the upstream pressures. If you wish to switch between displaying Static and Total Pressure, simply do the following: 1. Select Preferences from the File Menu. 2. Check/Uncheck the "Display Total Pressure" checkbox. 3. Close the Preferences view and return to the previous view. Keywords: static pressure, total pressure, pressure References: None
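As a reminder of the quantities being displayed (this is the standard fluid-mechanics relationship, not a formula specific to Aspen Flare System Analyzer): Total pressure = Static pressure + Dynamic pressure, where the dynamic pressure is ½·ρ·v² (ρ = gas density, v = velocity). At an increase in line size the velocity falls, so part of the dynamic pressure is converted back into static pressure; this is the static pressure recovery described above, while the total pressure still decreases in the direction of flow because of friction losses.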
Problem Statement: Is there any reference manual documenting the methods and properties users can use when writing VB Scripts for Aspen Custom Modeler?
Solution: You can find more information and examples on scripts and automation methods and properties in the online help, under Aspen Custom Modeler®, Automation. Keywords: Automation Methods, VB script References: None
Problem Statement: How do I aggregate time? I want to know how long a tag exceeds a threshold value. For example, for a one day period, how much of the day was the temperature above 70 degrees F?
Solution: The following query determines the amount of time a tag named A1113E, scanned every 30 seconds, is above 70 between '08-JAN-19 08:30:00' and '09-JAN-19 08:30:00'. You can adapt the query by changing the variables frequency, threshold, tagname, starttime, and endtime.

local frequency int;
local threshold real;
local totaltime int;
local totalvalues int;
local tagname record;
local starttime timestamp;
local endtime timestamp;

frequency = 30;                    -- Cim-IO scanning frequency in seconds
threshold = 70;                    -- Limit to check
tagname = 'A1113E';                -- Tagname to check
starttime = '08-JAN-19 08:30:00';  -- Starting time
endtime = '09-JAN-19 08:30:00';    -- Ending time

totalvalues = (select count(value) as totalvalues from history
               where name = tagname
               and period = frequency*10
               and value > threshold   -- change > to <, =, >=, <=, or <> for
                                       -- other limit checks.
               and ts between starttime and endtime);

totaltime = frequency * totalvalues;

write 'totaltime = '||cast(totaltime as char using 'dt12')||' totalvalues = '||totalvalues;

Keywords: Aggregate Time Capability SQLplus script References: None
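If you also want the result expressed as a fraction of the requested one-day window, a short line can be appended to the query above. This is a minimal sketch; it assumes the totaltime variable computed above (in seconds) and a 24-hour window of 86400 seconds:

write 'Percent of time above threshold = '||(100.0 * totaltime / 86400);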
Problem Statement: What can cause problems when using PIMS Excel files that reside on OneDrive?
Solution: Working with Aspen PIMS models that are located in OneDrive is not recommended; this workflow is not tested, and there may be issues due to network delays or connection problems. It is also known that if the OneDrive name has a comma (",") in it, then there will be problems reading the files. Best Practices: We recommend keeping the model files on the machine where PIMS is installed. To use the Aspen Petroleum Supply Chain applications, we recommend working in a supported environment; the supported environment details can be found at the following link: https://www.aspentech.com/en/platform-support Keywords: None References: None
Problem Statement: This article explains the difference between ‘System Time’ and ‘OPC Server Time’ when specifying the timestamp origin for Aspen Cim-IO for OPC.
Solution: OPC Servers maintain a cache containing values, timestamps, and quality statuses for each item scanned from the process. Aspen Cim-IO for OPC fetches information from the cache to send to Aspen InfoPlus.21. The value and timestamp for an item in the cache changes when the value for the item changes in the process. When you specify ‘OPC Server Time’, Aspen Cim-IO for OPC returns the timestamp for an item that is stored in the OPC cache. If the value of the item does not change, then the timestamp of the Aspen InfoPlus.21 tag does not change either. If you specify ‘System Time’, then Aspen Cim-IO for OPC uses the clock time of the Cim-IO server as the timestamp of the value read from the OPC cache. As a result, the time stamp of the item will change in InfoPlus.21 even if the value of the item remains constant in the OPC cache. Keywords: Timestamp source OPC Server Time System time References: None
Problem Statement: The AspenTech Excel Add-In, provided with Aspen Process Explorer, allows you to both read from, as well as write data back to, our Aspen InfoPlus.21 database. Most of our customers manually use the Pull-down menus, within Excel, that will appear once they have attached the Aspen Process Data Add-In to Excel. You can automate most of the procedure that will allow you to build (and save) a normal spreadsheet, complete with normal Excel calculations. Then you will be able to manually change some of the values in some of the cells and then finally, automate the sequence of the following: Force the Excel spreadsheet to recalculate. Perform an AspenTech "Data Entry Read" Perform an AspenTech "Data Entry Write". Step 3 would read the cells updated in Step 1, and the template built from Step 2, and then write values into the database.
Solution: First, get into the Visual Basic Editor in Excel via Macro on the Tools menu. Once inside the Visual Basic Editor, select References on the Tools menu, then select AspenProcessDataAddin. The solution is then a simple Excel macro, which might look something like this:

Sub AspenTest()
    Application.MaxChange = 0.001
    ActiveWorkbook.PrecisionAsDisplayed = False
    Calculate
    ATEntryRead
    ATEntryWrite
End Sub

For additional information, please read the Aspen Add-in help, in the section: Using Process Data Add-in Functions in Visual Basic. Keywords: Add-Ins Macro Excel Read Write Calculate References: None
Problem Statement: A transfer record is a record defined by IOGetDef, IOLongTagGetDef, IOLLTagGetDef, IOUnsolDef, IOLongTagUnsDef, IOLLTagUnsDef, or IOGetHistDef. If the io_data_status_desc field of an occurrence in an Aspen InfoPlus.21 Cim-IO transfer record has a value of "Invalid Tag", then the string contained in the field IO_TAGNAME is not a valid address in the process instrumentation. This slows the processing of the transfer record because Aspen Cim-IO makes requests to the process instrumentation for a non-existent address. The address contained in the field IO_TAGNAME should be corrected, or the occurrence should be removed from the transfer record. Removing occurrences from transfer records with many invalid tags can be time consuming. Attached to this article is a query that removes all occurrences with "Invalid Tag" in the field io_data_status_desc from a transfer record.
Solution: First make a copy of the Aspen InfoPlus.21 database in case the query has unexpected results. Place the contents of the file RemoveInvalidTagsFromATransferRecord.txt attached to this article into the AspenTech SQLplus query writer, and execute the query. The query prompts for the name of the transfer record and then confirms that you want to delete all the occurrences with invalid tags from the transfer record. After receiving confirmation, the query determines which occurrences have the field io_data_status_desc field set to "Invalid Tag", sets the field IO_RECORD_PROCESSING to OFF in the transfer record, removes the occurrences having invalid tags, and then sets IO_RECORD_PROCESSING back to ON. Keywords: Invalid Tag io_data_status remove occurrence query References: None
Problem Statement: What happens if you have a negative stream in a #MIX calculation in your Units workbook in APS?
Solution: The mix function in APS excludes property values of streams (or tanks) that have negative volumes. For example, if streams A, B, and C are mixed to produce D, where:

             Vol    P1
A            100    10
B              2    101
C             -1    1000
Expected D   101    2        i.e. (100*10 + 2*101 - 1*1000)/(100 + 2 - 1)
Actual D     101    11.784   i.e. (100*10 + 2*101)/(100 + 2)

So be careful if this is what you would like to do in your APS model. Under #MIX, the negative volume is simply ignored. Keywords: None References: None
Problem Statement: Often times, clients with custom records containing a history "value" field, want to aggregate that field as is possible by using the AGGREGATES and HISTORY pseudo tables. However, the default history "value" field that is used with this table is either IP_TREND_VALUE or TREND VALUE. Therefore, how does one change from the default FIELD_ID of IP_TREND_VALUE or TREND VALUE to use their appropriate history "value" field?
Solution: The description of the FIELD_ID in the online help and/or SQLplus Users Manual is as follows: The name of the field for which aggregate statistics are calculated. The name is converted to a Field ID and so records that have a field with the same Field ID but a different field name are also selected. The default field ID is 24190000 (hexadecimal) or 605618176 (decimal), which is the field ID of IP_TREND_VALUE and TREND VALUE. The following information clarifies Aspen's documentation on how to use the FIELD_ID in a WHERE clause if a non-default is desired. There are two ways to use the FIELD_ID in a WHERE clause to specify a non-default history value field. The first, and easiest, is to use the field name as follows:

SELECT ts, avg from AGGREGATES WHERE name = 'mycust' and field_id = FT('mycusthval');

In the above example, the custom record name is 'mycust' and the history value field used for the average aggregation is 'mycusthval'. The second, more complicated, way uses the actual field_id number. The FIELD_ID needs to be specified in the WHERE clause as a DECIMAL number instead of the field name used above. The field ID found in the Definition Editor for the history "value" field is in hex. Four zeros need to be appended to the end of it, and the result converted to decimal, before using it in the query. The field ID can be found in the Definition Editor by doing the following: Double click on your definition record from the list of the possible definition records that are defined for your system, and a list of fields from this definition record should now appear. Your history "value" field should reside in a repeat area. Double click on that repeat area sizing field to expand and see the fields. Right click on the field that corresponds to the history "value" field needed for your query and select Properties. On this screen the Field Number is displayed as a hex number. Append four zeros to the end of this hex number and convert it to a decimal number. This is the number to be used in your WHERE clause to specify your custom record's FIELD_ID. For example: Suppose the field ID you find for your custom history value field in the Definition Editor is 3612. Appending 0000 on the end gives you 36120000. Converting that to decimal, you get 907149312. This is the number used in the WHERE clause as follows:

SELECT ts, avg from AGGREGATES WHERE name = 'mycust' and field_id = 907149312;

Keywords: field_id aggregates history References: None
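As a quick cross-check of the conversion above, the FT function used in the first query can also be printed directly, which avoids the manual hex-to-decimal step. A minimal sketch, assuming the custom field is named 'mycusthval' as in the example above:

write FT('mycusthval');      -- prints the decimal Field ID to use with AGGREGATES
write FT('IP_TREND_VALUE');  -- prints 605618176, the default Field ID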
Problem Statement: What are the required steps for updating a network license file that is on an InfoPlus.21 server?
Solution: Some InfoPlus.21 servers will host a Software License Manager (SLM) server with a network license. It is recommended to install the new license on or after the license’s Birth date to prevent any InfoPlus.21 outage. To view the license’s Birth date, you may use the SLM License Profiler, refer to KB 22014. Double-click the new license file (.slf) and the Aspen License Installer will launch and automatically install the new license. You can use the SLM License Profiler to verify the new license is installed. Verify the Birth and Expiry dates match the new license. If the existing license expired before installing the new license file, then InfoPlus.21 will go into Grace Period or Denial State. You will be required to stop and start the InfoPlus.21 Database if InfoPlus.21 could not regain the license after installing new license. To read more about the different InfoPlus.21 license states and behavior, refer to KB 18640. NOTE: InfoPlus.21 does not require a database restart if the license file was installed while InfoPlus.21 was in License Granted state. Keywords: IP.21, SLM, MSC, MES, Manufacturing References: None
Problem Statement: How to disable the internal over-rides implemented by the SmartStep/Calibrate engine.
Solution: The procedure is different for ACO and RTE (DMC3) controllers.
For ACO controllers: When performing step testing using either SmartStep or Calibrate, the DMC engine adds internal over-rides to protect against unsafe entries (as determined based on process conditions and models) by operators and engineers. These include:
1. A User Target change for an MV that might knock a CV outside the test limits (Operator Limit +/- Test Margin).
2. An internally set Move Resolution to protect against small move resolution values entered by users.
The APC engineer working on the controller could choose to disable the internal over-rides, thus allowing users to enter the desired User Target values and Move Resolutions. This can be done by adding Input Calculations to the controller that read:
STCMFLAG = 1 (this disables the validation of the User Target)
STCORRECT = -1 (this allows the engine to bypass the internally calculated MOVRES)
Note that STCMFLAG and STCORRECT are internal engine parameters that do not need to be defined by users. The engine should automatically recognize these parameters in the input calculation and disable the over-rides. To re-enable these over-rides, the user can set STCMFLAG to any number other than 1 and STCORRECT to any number other than -1. If it is desired to disable/enable these over-rides online through the web interface, the user could define a user-defined variable in the General section of the controller and use the calculation to set the internal engine parameters to these values. This will allow the users to disable or enable the over-rides online. Example:
STCMFLAG = USER_FLAG, where USER_FLAG is a user-defined variable; = 1 to disable user target validation; <> 1 to enable user target validation.
STCORRECT = USER_CORRECT, where USER_CORRECT is a user-defined variable; = -1 to bypass the internally calculated MOVRES; <> -1 to use the internally calculated MOVRES.
For RTE controllers: These parameters no longer exist within the controller configuration, but they can still be used for DMC3 controllers. If these parameters are required for the controller operation, the user needs to define them via user-defined entries and set their values via Input Calculations. In the example below, both parameters are defined under general entries and the value for STCMFLAG is set via an input calculation.
NOTE: When the user changes the user target, there is an additional constraint in SmartStep and Calibrate: the user's target change needs to be larger than 1% of STMVMAXSTEP to be implemented (in transformed space, should a transform be used for the MV).
Keywords: SmartStep Move resolution User Targets References: None
Problem Statement: When the user is running a large PIMS model with a SQL database, there is a quicker way to extract the qualities of streams that have a very different starting initial guess and final recursed value. The following query would give us a quick understanding of which properties have changed significantly after the LP is solved:
Solution: After running the desired cases in the model, run the following query in SQL:

SELECT A.SolutionID, A.CaseID, Value, InitialValue,
       C.Tag AS stream, B.Tag AS quality,
       abs(InitialValue - Value) AS rec_difference
FROM PrStreamQuality AS A
LEFT JOIN PrQuality AS B ON A.QualityID = B.QualityID
LEFT JOIN PrStream AS C ON A.StreamID = C.StreamID
WHERE InitialValue NOT IN (-999, 999)
  AND abs(InitialValue - Value) >= 2

The query is set to display recursed properties that changed by 2 or more. You can also tweak the statement to show other amounts of change. The query does not include initial assay guesses, which are the 999s in table PGUESS. Keywords: None References: None
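If it helps to review the largest recursion changes first, the same query (same tables and columns as above) can simply be sorted on the calculated difference:

SELECT A.SolutionID, A.CaseID, Value, InitialValue,
       C.Tag AS stream, B.Tag AS quality,
       abs(InitialValue - Value) AS rec_difference
FROM PrStreamQuality AS A
LEFT JOIN PrQuality AS B ON A.QualityID = B.QualityID
LEFT JOIN PrStream AS C ON A.StreamID = C.StreamID
WHERE InitialValue NOT IN (-999, 999)
  AND abs(InitialValue - Value) >= 2
ORDER BY rec_difference DESC;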
Problem Statement: Selling products in MPIMS may not appear straightforward. This article discusses some FAQs about how to set up the correct structures in the local and global models for selling different products in MPIMS.
Solution: 1) Is it necessary to bring all the products to a local market and then sell them in the global structure? Yes, it is necessary. The BUY and SELL tables in the local models are only for internal data checking. The products need to be brought to the global model and bought/sold there in tables DEMAND and SUPPLY. 2) There are some products (having almost zero price); can these products be sold at a local site using T.DEMALLOC? E.g. product ASH from local model A, with a row defined in T.DEMALLOC as ASHAA (however, there is no local market A defined). Does PIMS understand that "ASH" is being sold at site A only, so that the corresponding price and limits can be provided in T.DEMAND? DEMALLOC is a map of which plants can supply which products to which markets. DEMAND then sells products to specific markets. But DEMALLOC does not actually sell the product for you. You also need to provide the price and limits in table DEMAND as well; PIMS cannot work this out by itself. 3) In the Help section I see that "...AA" is a row being used to transfer all the products from local model A to local market A. This is not a requirement but an easy way of transferring all the products in one go. Please confirm. This is the help file for DEMALLOC, and it is correct. "..." means all products in the model, and "..AA" means all products to market A from plant A. 4) I am using T.LOCTAGS in order to differentiate a product coming from local models. So, in T.DEMALLOC, where origin and destination have been provided, do I need to use the new tag or the old tag from the local model? You will need to use the new tag defined in table LOCTAGS. In the global model, you will always use the global tags. For example, if you have "REG" defined in the global model, which is mapped to "URG" in one of the local models, then in table DEMALLOC, "REG" is used because it is defined in the global model already. 5) T.GBLNMIX is used to create a blend in MPIMS. What if the blend is then sent to a local plant for processing? GBLNMIX and GBLNSPEC are often used together to define global constraints for the products you specify. It is very similar to regular single-plant PIMS. In GBLNMIX, the rows allow you to specify which local model you are blending the product in. This is the only place where we need to tell PIMS which local model is handling the blending process. If you need to transfer finished products from one plant to another, you can go directly to table TRANSFER and follow the syntax in the rows. But in terms of the blending section, only tables GBLNSPEC and GBLNMIX have control over it. Keywords: None References: None
Problem Statement: Microsoft Power BI is a very useful tool for creating user-defined dashboards for data visualization. This tool is very useful for us to visualize what is going on in our PIMS results database. How can users get started with Power BI?
Solution: Download Power BI Desktop on your machine from the Microsoft Store. There are different versions of this tool; the desktop version is free for users to use. After downloading and installing Power BI, we need to make sure that we have a PIMS model that has already been run. It does not matter whether you are using PIMS, PIMS-AO or AUP, or whether it is connected to an Access database or a SQL Server database. After you create a blank dashboard, you will need to connect to the database that holds the results tables of your model. In the following example, Power BI is connected to a SQL Server database which contains the results from running the Volume Sample: You can click the “Get data” button directly or choose from the drop-down menu; both give the same result. Enter your server and database information in the text boxes. You will have 2 connection modes available: Import and DirectQuery. Import is where Power BI copies all tables from your database directly and keeps the data types for each table the same as they are in the database. DirectQuery gives you the opportunity to use queries to choose which tables you would like to select, and you can also change the data types and table information by executing queries. Both options will take you to the navigator pane next: You can also preview each of the data tables and choose whether to load them into Power BI or not. There are two types of data tables in PIMS: Pr and RW. Pr tables are the normal results data tables, and RW tables contain union queries that combine the desired data tables to produce the desired table outputs. Next, Power BI will load the data tables that you just selected and store them in the “Fields” section. This is the first step towards creating effective and attractive reports with Power BI. We will work through examples in the next KB: “Creating dashboards with Power BI”. Keywords: None References: None
Problem Statement: How to address "Encountered an improper argument" message when launching Results0000.flo for Periodic Case
Solution: If you are working with a Periodic model and you run into the following message: Make sure you are not using special characters, for example “/”, as in the following example. For the first period of one case of the PPIMS Volume Sample located in C:\Users\Public\Documents\AspenTech\Aspen PIMS\PPims\PPIMS Volume Sample, I have set my period names as follows: When I try to launch my results classic flowsheet, I observe the following result: This leads to the "Encountered an improper argument" error message, which can be resolved by removing the character as in the following image: When I try to launch my results classic flowsheet again, I observe the following result: This allows you to properly launch the flowsheet Resul000001.flo. Keywords: None References: None
Problem Statement: How to model Molecular Sieve or Adsorption process in Aspen HYSYS ?
Solution: Unfortunately, in Aspen HYSYS there is no unit operation capable of modeling a molecular sieve. If you want to include this unit as part of a larger steady-state model, it is possible to represent it with a simple Component Splitter model, where you define the splits. Note, however, that the Component Splitter does not represent a real-life unit; it simply separates components into product streams according to the splits the user specifies. Aspen Plus software also cannot be used for this kind of application, since it only runs in steady-state mode and its SEP2 block only provides a function similar to the Component Splitter in HYSYS. AspenTech does have a tool called Aspen Adsorption which is designed for this type of application - but this is not part of HYSYS. Further details about ADSIM can be found on our website at https://www.aspentech.com/en/products/pages/aspen-adsorption/ Also, please refer to the online training for the Aspen Adsorption software:
Creating simple flowsheet https://esupport.aspentech.com/S_Article?id=000040505
Creating cyclic operation (1/2) https://esupport.aspentech.com/S_Article?id=000040578
Creating cyclic operation (1/3) https://esupport.aspentech.com/S_Article?id=000040619
Another option is to use Aspen Custom Modeler (ACM) to develop the required unit operation by writing ACM code. Keywords: Adsorption, Molecular Sieve References: None
Problem Statement: What is the significance of area ratio reported in the Overall Summary in Plate Fin exchanger
Solution: An area ratio can be defined for each stream in a PlateFin exchanger. This term is more familiarly used with shell and tube exchangers. It is the ratio of the actual stream heat transfer area to the area required for a specified duty. For two-stream exchangers such as shell and tube, the ratio must be the same for both streams and is taken as a simple measure of the acceptability of exchanger performance. An area ratio above one is taken to mean that an exchanger can more than achieve a specified duty. In a multi-stream exchanger, the position is more complicated, since each stream can have a different area ratio. An area ratio above unity does not necessarily indicate that all the area is in the right place to achieve the desired heat transfer. Nevertheless, the area ratio can be useful as one more parameter indicating how well an exchanger is performing. For a Simulation calculation, the area ratio should in principle be unity. Values slightly different from unity sometimes occur when the overall heat load (based on stream exit conditions) has converged more rapidly than the local stream heat transfer at all points between inlet and outlet. Keywords: area ratio, Plate Fin exchanger References: None
Problem Statement: If an analyst needs to study the life cycle of semi-finished blend stocks, how are the properties of the inventoried blend stocks recursed?
Solution: All streams in PIMS are recursed in one of a fixed set of ways, so it really depends on how the stream is generated. For example, if the blendstock is directly bought, then its properties are fixed in BLNPROP and used directly in blending. If the blendstock is produced in a submodel, then there needs to be an initial guess value for its properties in PGUESS to trigger recursion, and in each recursion pass those property values get updated closer to the real value. Once the recursed value is within the absolute tolerance, the recursion process stops and those will be the final property values. Those values are then used in blending to produce final products. Alternatively, you could have a property calculation formula, PCALC, or ABML/UBML defined in your model, meaning the properties of stream A depend on the properties of stream B through some sort of relationship. Keywords: None References: None
Problem Statement: Run-time Error 13 on datasheet and the Datasheet Definer cannot be used.
Solution: Datasheet Definer might be reading some Boolean variables as Operating System strings according to the language enabled in Excel and/or Windows, which causes a type mismatch when the file is opened in DSD. Also, the Datasheet add-in should remain enabled and all the ABE custom properties should be available (class view names, fields, links, group names, etc). The issue was that there were Boolean strings in French in the ABE custom properties, which are not visible through the Excel and Datasheet add-in user interfaces (i.e. we see radio buttons that represent a Boolean configuration, but we are not able to see “false”/“faux”). This is why we see the issue in DSD and not in Excel: the values in French are not recognized as valid in ABE, and thus the type mismatch error appears.
How to recover the original .xlsm files with French Boolean values: Note: This is not the previous workaround of copying the pages to a new file and losing ABE properties.
1) Find all the Boolean strings in the comments. Since these values are visible in the Excel/DSD user interface, they can be listed in this way: - Open the .xlsm file in Excel (no need to open it with DSD). - Press Ctrl+F to open the Find and Replace dialog. Set the fields as follows: - In Find What: Faux; Within: Workbook; Search: By Rows; Look in: Comments. - Press the Find All button. This should be done for the “vrai” string as well. A list will be displayed with all the comments containing these values. Unfortunately, Excel does not provide the option “Comments” in the Look in field on the Replace tab. Here we can use the macro provided in the attachment (Step 2).
2) Replace the Boolean strings in French with English strings. - Open the .xlsm file in Excel (no need to open it with DSD). - Click on the Developer tab, then click Visual Basic. - In the Project view, right-click the name of the datasheet and select Insert > Module. - Open the new module and paste the macro code (Attachment: ExcelReplaceCommentsMacro.txt). - Click Run to apply the macro to all the sheets. - Close the VB editor. - Save the changes in the datasheet. After this, we can verify again following step 1. If there are no more strings in French, the module that contains the macro can be deleted and the changes saved.
3) Find and replace the French strings in the ABE Custom Properties. - Install 7-Zip. - Create a backup of the updated .xlsm file (just in case). - Rename it by adding .zip at the end (Sabic Packaged Equipment Type 6 Datasheet_Rev 0.xlsm.zip). - Right-click on the .zip file and select 7-Zip > Open archive. (This is done inside the zipped file to avoid corrupting the file, which would prevent it from being opened even in Excel.) - Go to the docProps folder, right-click on custom.xml, and click Edit. - The file will be opened in Notepad. Go to Edit > Replace…, enter Find what: Faux, Replace with: False, and click Replace All. - Click File > Save. Close the 7-Zip window. - Rename the .zip file by deleting the .zip string and leaving the .xlsm extension (Sabic Packaged Equipment Type 6 Datasheet_Rev 0.xlsm). This should be done for the “vrai” string as well.
4) Open the recovered file in Datasheet Definer. Now the file should be clean of French Boolean strings. Open the Datasheet Definer, connect to a Workspace and open the recovered file. All the ABE properties should be there. Keywords: Run-time Error 13, Datasheet Definer. References: None
Problem Statement: How do ABML correlations work in recursion in PIMS? Do we need to define anything else for the correlations to work in submodels?
Solution: ABML correlations take care of any relationships that you would like to define in the blending section of the model. By default, they do NOT take care of the same properties defined in submodels used in recursion. For example, a common relationship is between RVP and RVI. We can use CORR15 and CORR52 in ABML to define RVI = RVP ^ 1.15. However, this relationship works in blending, which means that if a stream BUT’s RVI property is recursed in submodel SNC4, RVP is not going to recurse along with RVI. Even if you have the ABML correlation defined correctly, it still doesn’t “back calculate” the RVP value for BUT in the final report. You have BUT’s RVI defined in PGUESS, and RRVIBUT defined in submodel SNC4, which means RVI is recursed in the model. But this is not enough to also recurse RVP and report the correct value for it in the final solution. We can change this situation by doing the following three things (an illustrative layout of these entries is sketched at the end of this article):
1. Define BUT’s RVP in PGUESS. This will trigger both RVP and RVI in the recursion process.
2. Define RRVPBUT in submodel SNC4. Now in the submodel you will have 2 R rows: RRVIBUT is used in the recursion calculation, and RRVPBUT is a dummy recursion row with a -999 in the BUT column. This is just for the “dummy recursion” to take place.
3. Define RRVPBUT in table ABMLSUBF with a coefficient of 1 under column SNC4. This table tells PIMS which ABML correlations are used in submodels.
After doing these three steps, you will see RVP reported correctly in the submodel section of the final report. This will work not only for this example, but also for any ABML correlations that you would like to bring into the recursion process. Keywords: None References: None
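For orientation only, here is a minimal sketch of how the three entries described above might be laid out in the model tables. Column positions, TEXT descriptions and any additional columns depend on your own model; only the cells named in this article are shown, and the initial guess value is a placeholder:

Table PGUESS (initial guess so RVP enters recursion):
              RVP
  BUT         <initial guess>

Table SNC4 (dummy recursion row for RVP):
              BUT
  RRVPBUT     -999

Table ABMLSUBF (apply the ABML correlation in submodel SNC4):
              SNC4
  RRVPBUT     1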
Problem Statement: If your PIMS model results are stored in a SQL database, there are some queries that you can execute to view the economics of your different cases without opening the full solution report for each case. This is easier because, if you have more than 100 cases, you would otherwise need to open every full solution report after running all the cases just to see the economic summary.
Solution: After running your cases in PIMS, go directly to the SQL database that stores the results data. Start a blank query and execute the following:

SELECT A.SolutionID, A.CaseID, C.Tag, B.Description, B.MinValue, B.Activity, B.Cost, B.MarginalValue
FROM PrCase AS A
LEFT JOIN PrPurchase AS B ON A.SolutionID = B.SolutionID
LEFT JOIN PrStream AS C ON B.Description = C.Description
WHERE B.MarginalValue < 0
ORDER BY MarginalValue;

SELECT A.SolutionID, A.CaseID, B.Description, A.Activity, C.ObjectiveFunction
FROM PrEconomicSummary AS A
LEFT JOIN PrEconomicSummaryType AS B ON A.EconomicSummaryTypeID = B.EconomicSummaryTypeID
LEFT JOIN PrCase AS C ON A.SolutionID = C.SolutionID
WHERE (B.Description LIKE 'Feedstock%') OR (B.Description LIKE 'Product%') OR (B.Description LIKE 'Utility%')

The first query returns a table with all feedstocks purchased or sold that hit the minimum bound with a negative marginal value; these are usually the streams whose constraints users really need to consider changing. The second query returns a table that summarizes feedstock purchases, product sales, utility purchases, and utility sales in your model. This works for both PIMS DR and AO. If you have a PPIMS, MPIMS, or XPIMS model, then you could also incorporate the inventory, transfer, and global economic summary tables into your results. Keywords: None References: None
Problem Statement: After adding a new CDU mode to a model in the model tree and ASSAYLIB, if you change the Excel data sheet attached to ASSAYLIB in Assay Manager instead of PIMS, is the change automatically synced back to PIMS? Do you need to perform any other extra steps?
Solution: The change is automatically synced back to PIMS, and the user does not need to perform any other steps after editing the Excel data table in Assay Manager. Some users might find that, after closing Assay Manager, the model tree does not change and still shows the old Excel sheet under ASSAYLIB. However, this does not mean the change hasn’t taken place. The old Excel sheet has already been substituted by the new one, and the user can refresh the model to see that. Even without refreshing, if you run the model you can still see the results being updated. So there are no other steps the user needs to perform after closing Assay Manager. Keywords: None References: None
Problem Statement: We have seen in multiple cases that adding or modifying occurrences in IoGet or IoUnsol records without first turning OFF the IO_RECORD_PROCESSING field can cause problems resulting in Aspen Cim-IO failure, especially when using Aspen Cim-IO Store and Forward. This solution describes what AspenTech considers to be the best practice when adding new occurrences to Get or Unsol records.
Solution: If the Get or Unsol record is left turned ON during maintenance, each time a new occurrence is added to the repeat area, the Scan list or the Unsol list is resent or re-declared on the Aspen Cim-IO server. Since you may have many hundreds of occurrences already configured in the Get or Unsol record, each re-send or re-declare may take some time to complete. In the meantime, each time you add another occurrence to the record, the list is resent or re-declared again. All these re-sends/re-declares can pile up on the Aspen Cim-IO server and cause the tag initialization to fail. It may be difficult to determine the exact cause of failure from the Cim-IO Message log. The best practice for Get or Unsol record maintenance is to turn the Get or Unsol record OFF before adding new occurrences. When you are done adding occurrences, turn IO_RECORD_PROCESSING back ON. This results in a single update of the Scan list or one re-declare of the Unsol list, which will generally produce better results. While the Get or Unsol record is switched OFF for maintenance, you WILL lose data for the tags listed in that record. To minimize this loss of data, add the new occurrences all at once with an Aspen SQLplus query (a sketch is shown after this article) or by using the new Aspen Configuration Wizard (introduced starting from V7.3), rather than adding them one by one in the Aspen InfoPlus.21 Administrator. Update: With the release of Aspen InfoPlus.21 V10, this recommendation is now enforced and you will not be able to make changes to the repeat area of the transfer records without first turning OFF the IO_RECORD_PROCESSING. See Why am I unable to update a transfer record's repeat area in a V10+ Aspen InfoPlus.21 database? Keywords: Unsol record Get record scan list maintenance occurrences References: None
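For illustration only, a minimal Aspen SQLplus sketch of the recommended sequence is shown below. It assumes a transfer record named MYGETREC, that your SQLplus version supports INSERT into a record's repeat area, and that IO_TAGNAME is the only occurrence field being populated (real scan lists normally also need the value record/field and related occurrence fields set); the tag addresses are hypothetical:

-- turn record processing OFF before maintenance
UPDATE MYGETREC SET IO_RECORD_PROCESSING = 'OFF';

-- add all new occurrences in one pass
INSERT INTO MYGETREC (IO_TAGNAME) VALUES ('FIC101.PV');
INSERT INTO MYGETREC (IO_TAGNAME) VALUES ('TIC205.PV');

-- turn record processing back ON so the scan list is sent only once
UPDATE MYGETREC SET IO_RECORD_PROCESSING = 'ON';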
Problem Statement: How is the information processed when it comes from Excel Integration Utility to Import Assays Dialog box in Aspen Petroleum Scheduler?
Solution: This tech tip explains how the information is processed when it comes from the Excel Integration Utility to the Import Assays dialog box in Aspen Petroleum Scheduler. For this example we have used the demo model located in C:\Users\Public\Documents\AspenTech\Aspen Petroleum Scheduler\Demo\Access and the Excel Integration Utility default template.
Information in table PIMS_ASSAY_XREF
Information in PIMS_CRUDE_XREF
Information in PIMS_PROP_XREF; this will be mapped according to the information previously provided
Information in WholeCrudeProps
Information in the Assays tables
Notes: Assay numbers are associated with a crude unit by appending the assay number to the end of the crude unit name. The mapping of the assay streams to the output streams is a one-to-one mapping of available cuts to the stream names defined as products. The built-in limit for crude units is 99. The maximum number of assays is 99. When adding a new crude cut, it is important to build it according to the existing structure. Keywords: None References: None
Problem Statement: What is the purpose of ZERO_EVENT_TOLERANCE keyword in config table?
Solution: This tech tip explains the purpose of the ZERO_EVENT_TOLERANCE keyword in the config table. ZERO_EVENT_TOLERANCE is a config setting used in previous versions that has the same function as the Zero-Value Tolerance setting in current versions. The Zero-Value Tolerance has been added to control the display of events with quantities close to zero on the Gantt chart. You can control the display of events with quantities from 0 to 0.5. The Zero-Value Tolerance option is found on the Event Filter tab associated with the Gantt and Trend Chart Options dialog box. Keywords: None References: None
Problem Statement: How can I determine the time required to increase the pressure in a vessel from P1 to P2?
Solution: Pressurization and depressurization analyses can be performed with help of dynamic modeling. In this example, a steady-state model is converted to dynamic mode to determine the time needed to take the pressure in the vessel from 0.4367 bar_g (P1) to 2 bar_g (P2). A stream comprised of light hydrocarbons (C1 to C6) and a vapor fraction of 0.5, is fed to a separator. The separator has a unique feed stream and both vapor and liquid product streams. These streams are connected to valves, which isolate the inlets and outlets of the process, as shown in the image below. The Peng-Robinson cubic Equation-of-State (EOS) model is used as the property method. The ‘Event Scheduler’ tool provides a means of automating dynamic models by defining actions that are triggered once either a condition in the model is met or a specific simulation time is reached (to know more about this tool, please refer to the Aspen HYSYS Help Menu Guide (use the ‘F1’ key)). In this example, the Event Scheduler has been defined with a single sequence which contains two events: 1) ‘CloseProductValves’ – This event will be triggered 20 min after the sequence is activated. Both valves downstream of the separator will be fully closed, their ‘percentage open’ value will be set to ‘0.0’. This will cause pressure in the vessel to build up. 2) ‘Pressurization’ – This action will wait until the pressure in the vessel reaches 2 bar. Once the pressure in the vessel hits this value, the integrator will be paused. The model has also a strip chart (plot) set up to show the change pressure in the vessel over time, in other words, how the vessel gets pressurized. To perform the pressurization analysis, follow the instructions below: 1) Open the file attached to this KB Article, ‘SS_Event_Scheduler_V9.hsc’. Note: This steady-state file was built in V9, so if you have a newer version installed on your machine, you will not have any problems at all opening it up. However, for users which might be still working with older versions, the model was also saved as an *.xml file. We recommend you to always upgrade to the latest version available for the Aspen Engineering Suite (AES). 2) Click on the ‘Dynamic Mode’ button on the ‘Dynamics’ tab of the ribbon. Click on ‘Yes’ when prompted to confirm your transition to dynamic mode. 3) Click on the ‘Event Scheduler’ button on the ‘Modeling Options’ section. Left-click on ‘SequenceA’ and next click on the ‘Start’ button. 4) Now hit the ‘Run’ button to start the dynamic run and wait until the integrator is stopped. 5) The dynamic run will be paused at approx. 20 min. 6) You can also look at the historical data of the already defined ‘Vessel Pressurization’ strip chart: Determining the time required to go from State 1 (S1) to State 2 (S2) (in this example, from P1 to P2), can be done following the steps described in this example, although, of course, it is not the only means for doing so. Keywords: Event Scheduler, Pressurization, Events, Strip Chart, Dynamic Run, Time. References: None
Problem Statement: If an analyst needs to study the inventory life cycle, does PIMS have an option to easily formulate the end-of-cycle requirement so that the inventory at the start of period 1 is equal to the inventory at the end of the last period?
Solution: There is a way to set the start inventory of period 1 to be equal to the end inventory of the last period. In PIMS, inventory is set up through table PINV, which allows you to set acceptable opening and closing inventories for each period, along with a price. Because PIMS is an LP-based tool, it will optimize the inventory for you once you enter the constraints, so PIMS does not natively provide an option to force the opening and closing inventories to be equal. That said, you can set any type of user-defined constraint in table ROWS; everything you put in there becomes an equation. In this example, you would need to set Starting Inventory - Ending Inventory = 0. This equation adds one more constraint to your model, but keep in mind that it might cause infeasibilities. Keywords: None References: None
Problem Statement: How to switch between .mdb file and .accdb file for Access database when generating results for PIMS?
Solution: Please go to Tools -> Program Options -> General Program Options -> Output Database -> Access Version. If you choose "Access 2000/2003", it will produce a .mdb Access database. If you choose "Access 2007", a .accdb file will be produced. Keywords: None References: None
Problem Statement: When you are working on an Aspen HYSYS V10 simulation, the ABE activated Datasheet option does not work properly and does not allow you to select an ABE workspace.
Solution: In order to activate the Datasheet option in Aspen HYSYS, all units need to be solved in the simulation; otherwise it does not allow you to select an ABE workspace. Keywords: Aspen HYSYS, Datasheet Option. References: None
Problem Statement: What is the procedure to retrieve and publish datasheets and diagrams from ABE to SPF (SmartPlant Foundation)?
Solution: Attached there is a video which shows the step-by-step procedure of how to retrieve and publish datasheets and diagrams from ABE to SPF (SmartPlant Foundation) Keywords: SPF, SmartPlant Foundation, Retrieve, Publish. References: None
Problem Statement: Why does Activated Energy Analysis make a mistake when determining whether a stream is a hot stream or a cold stream?
Solution: Normally, if a stream is heated (cold stream), its temperature will rise, and if a stream is cooled (hot stream), its temperature will drop. Activated Energy Analysis determines whether a stream is a cold stream or a hot stream based on its temperature change. This simple rule works well in most cases. However, it may produce problems in some special cases, as shown in the attached example. From Feed to V, the stream is heated (cold stream) as B2 has a positive duty. However, the stream temperature drops because of significant depressurizing. Since the temperature drops, Activated Energy Analysis treats it as a hot stream, but this produces an error because the specified hot utility cannot be used for a hot stream. To fix this issue, you only need to add a valve before the block in error, so that depressurizing is separated from heat exchange. This time Activated Energy Analysis will handle the stream correctly based on the temperature change. Keywords: Aspen Plus Activated Energy Analysis AEA Negative Energy Saving References: None
Problem Statement: How does Aspen Process Economic Analyzer (APEA) select and define fluid selections from the imported simulator/stream data?
Solution: The Aspen Icarus framework only loads the 4 components with the largest flows from the simulator to determine its component mixture specs. These values are then processed to see if they are mappable to an Icarus fluid (see pureref.doc in the Program\Sys\Doc directory). If a single component comprises 35% or more of the stream flow, then it is made the primary fluid component of the stream. If any of the 4 major components are not mappable to a capable Icarus fluid, their flow is added to that of the Primary Fluid Component's flow in the stream. For cases where the stream is not composed primarily (35% rule above) of a capable fluid, the fluid classification rules are applied to determine a characteristic Primary Fluid Component for the stream. This classification uses the properties of the stream such as viscosity, density, molecular weight and thermal conductivity, as well as defined Primary Fluid Classes for the mappable components in the stream (example Toluene -> Aromatic Liquid), to determine the fluid class. If no Primary Fluid Component (PFC) can be determined by using the mappable components in a stream, then the following criteria are used to estimate a PFC for the stream:
Note: Viscosity in centipoise and density in grams/cubic centimeter
Solid Phase
PFC = Solid
Liquid Phase
MW 17.65 - 18.35, PFC = Water
MW 25.0 - 150.0 and density 0.75 - 1.0 and Viscosity <= 0.325, PFC = Aromatic Liquid
MW 75.0 - 150.0 and density <= 1.0 and Viscosity <= 0.325, PFC = Light Hydrocarbon Liquid
MW 125.0 - 200.0 and density <= 1.25 and Viscosity <= 0.325, PFC = Medium Hydrocarbon Liquid
MW >= 200.0 and Viscosity 1.0 - 100.0, PFC = Heavy Hydrocarbon Liquid
MW >= 200.0 and Viscosity >= 100.0, PFC = Very Heavy Hydrocarbon Liquid
MW 60.0 - 100.0 and density 1.5 - 2.0 and Viscosity 0.4 - 0.45, PFC = Inorganic Acid
MW <= 125.0 and density >= 1.0 and Viscosity <= 0.325, PFC = Organic Acid
MW <= 125.0 and density <= 0.75 and Viscosity >= 0.325, PFC = Alcohol
MW <= 25.0 and density 0.75 - 1.0 and Viscosity <= 0.325, PFC = Water
MW >= 25.0 and density >= 1.0 and Viscosity 0.325 - 1.0, PFC = Miscellaneous Inorganic Liquid
Otherwise PFC = Light Hydrocarbon Liquid
Vapor Phase
MW 17.65 - 18.35, PFC = Steam
MW <= 50, PFC = Inorganic Gas
MW <= 75, PFC = Hydrocarbon Gas
MW <= 90, PFC = Halogenated Gas
Otherwise PFC = Inorganic Gas
Keywords: simulator fluid primary References: None
Problem Statement: Data collection for Inferential Qualities (IQ) applications is now available with Aspen Watch Maker V11.0 Procedure to collect data for IQ applications for V11.0 is described below. This feature is not included in previous releases, a workaround for older versions is also provided in this article.
Solution: For V11:
Add the AspenIQ node to the Online Connection settings on Aspen Watch. The default port used for IQ data collection is 12350. For more information on how to configure the online host connection, refer to the Aspen Watch Maker Help File - Configuring Online Host Connections for Aspen IQ Applications Data Retrieval. Go to your .iqf application file and make sure the Aspen Watch Performance Monitoring (AWENB) parameter is set to 1 for each IQ application you wish to monitor. Load and start the IQ application. Go to Aspen Watch Maker and select the IQ node; the newly loaded application should be listed there. You can select it and start data collection. Built-in KPIs are available for IQ applications in Aspen Watch. For more details about built-in KPIs for IQ applications, please refer to the Aspen Production Control Web Server Help and go to IQ Applications - KPIs, Metrics, and Calculation.
For V10 and prior versions: Use the procedure below as a guideline:
1. The Miscellaneous Tags need to use a Logical Device named IODEVx. In the Watch server, configure a Logical Device IODEVx using Cim-IO for Set-Cim (pointing to the Watch / IP.21 database itself) or Cim-IO for OPC (pointing to the Watch / IP.21 database Aspen.InfoPlus21_DA.1). The Cim-IO for Set-Cim / Cim-IO for OPC Logical Device can be configured in different ways; select the method appropriate for your system. Note: If using Cim-IO for OPC (Aspen.InfoPlus21_DA.1), configure the PE_BranchDef, IP_TagsBranch to see the AW_MSCDef. For further information check solution 133869. In the Online (IQ) server, configure the Logical Device IODEVx too (add the required entries in the cimio_logical_devices.def and services files; an illustrative example of these entries is shown after this article).
2. Determine what IQ entries need to be historized. Via Watch Maker, create the Miscellaneous Tags (Tools menu, Tag Maintenance, Miscellaneous tab) with IO Get Record Type: None for each IQ entry.
3. Go to the IQF file, modify the entries to use Cim-IO Device IODEVx, specify the Tagname as the Miscellaneous Tag created, and reload the controller. Note: Depending on whether Cim-IO for Set-Cim or Cim-IO for OPC is used, the syntax will change (adding a string beside the Miscellaneous Tag name). In the case of Cim-IO for Set-Cim it is <Miscellaneous Tag name> AW_VALUE (a space between both sub-strings), and for Cim-IO for OPC it is “<Miscellaneous Tag Name>”.AW_VALUE (the Miscellaneous Tag name between double quote marks and a dot between both sub-strings).
==============================================================================
In the following example IQ: NAPHTA95, PR Module, PREDBIAS was configured to use the Miscellaneous Tag NAPHTAPREDBIAS. Cim-IO for Set-Cim is used. Comparing the Prediction Plot against the Miscellaneous Tag plot. Keywords: Inferential Qualities, IQ, Aspen Watch Maker, Miscellaneous Tags, InfoPlus.21, Data Collection References: None
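For illustration only, a sketch of the Cim-IO configuration entries mentioned in step 1 for the Online (IQ) server is shown below. The logical device name, node name, service name and port number are all placeholders and must match your own Cim-IO configuration; the exact file layout may also vary between versions:

cimio_logical_devices.def (one line per logical device: device name, Cim-IO server node, DLGP service name):
  IODEV1   WATCHSERVER   CIMIO_SC_DLGP1

services file (Windows\System32\drivers\etc\services on the relevant nodes):
  CIMIO_SC_DLGP1   5001/tcp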
Problem Statement: Is it possible to consider fouling factor in RADFRAC model in Aspen Plus?
Solution: There is not an explicit fouling factor for columns in Aspen Plus at the moment. However, efficiencies are the best way to describe fouling in a distillation column. Assuming the column thermodynamics is modelled accurately, a data fit could actually be used to set the efficiencies. The calculated efficiencies from the data fit would indicate when the column is fouled. Another way of thinking about this is understanding what is going on when a column is fouled. In a trayed column, this would result in sieve holes plugging up, or valves failing. There is an efficiency impact from this phenomenon, and likely an impact on both the pressure drop of the trays and the flood factor. Adjusting the system factor in Column Analysis may be another option to account for this, but without data (it is very difficult to measure flood %), it probably will not have much meaning. Using RADFRAC efficiencies will probably be the most effective way of modelling what is happening. For packing, fouling may cause liquid channeling, and it also impacts the overall effectiveness of a packed section. Again, the RADFRAC efficiencies would be the best way to consider this. Keywords: Fouling Factor, RADFRAC, Efficiencies. References: None
Problem Statement: Is there any example file of a furnace used in the silicon carbide production process?
Solution: The attached Aspen Plus V10 example file contains an example of how to model a furnace used in the silicon carbide production process. This example treats the furnace as a Gibbs reactor. The properties can be tuned (the Gibbs energy of formation in particular) to get more reasonable yields. Aspen Custom Modeler (ACM) would be the right tool for a high-fidelity furnace model. The real furnace operates dynamically (as a batch). The inner part of the furnace near the electrode gets much hotter than the outside section, producing a different mix of products. ACM can model the process dynamically, taking into account differences in position (using partial differential equations). Keywords: Furnace, Silicon Carbide, RGIBBS. References: None
Problem Statement: What is the procedure to revise (Submit, Check and Issue) a datasheet and a drawing for SPF (SmartPlant Foundation)?
Solution: Attached there is a video which shows the step-by-step procedure of how to revise (Submit, Check and Issue) a datasheet and a drawing for SPF (SmartPlant Foundation). Keywords: SPF, SmartPlant Foundation, Revision. References: None
Problem Statement: Is there any example KB method for copying mapping ports from MS Excel into ABE?
Solution: The attached KB method is an example of how to copy mapping ports into ABE. This method will copy object data into ABE attributes. In order to load, compile and link this KB method (.azkbs file) to a workspace, the ABE Rules Editor must be used. Also, it should be included under the configuration file found in this directory: C:\AspenZyqadServer\Basic Engineering19.1\WorkspaceLibraries\KBs. Keywords: AZMethod, Copy Ports, KB. References: None
Problem Statement: What is the procedure to log into a SPF (SmartPlant Foundation) registered workspace and retrieve PBS document from SPF?
Solution: Attached there is a video which shows the step-by-step procedure of how to log into a SPF registered workspace and retrieve PBS document from SPF. Keywords: SPF, SmartPlant Foundation, Workspace, Register, Explorer, PBS. References: None
Problem Statement: Aspen SQLPlus does not have a function to convert a time string in HH:MI:SS AM or HH:MI:SS PM format to 24-hour format.
Solution: The function TimeAMPMtoTime24 attached to this article converts a time string in HH:MI:SS AM/PM Format to 24-hour format. For example, the function converts 12:01:01 AM to 00:01:01, 01:01:01 AM to 01:01:01, and 01:01:01 PM to 13:01:01. TimeAMPMtoTime24 expects a time string formatted as HH:MI:SS xM and returns an eight character time string in 24-hour format. Keywords: Sample query convert time 24 hour format References: None
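For illustration only - this is not the attached Aspen SQLplus function, just a sketch of the same conversion logic in Python using the standard library (the function name here is hypothetical):

from datetime import datetime

def time_ampm_to_24(time_str):
    # %I = hour on a 12-hour clock, %p = AM/PM designator
    return datetime.strptime(time_str.strip(), "%I:%M:%S %p").strftime("%H:%M:%S")

print(time_ampm_to_24("12:01:01 AM"))   # 00:01:01
print(time_ampm_to_24("01:01:01 AM"))   # 01:01:01
print(time_ampm_to_24("01:01:01 PM"))   # 13:01:01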
Problem Statement: After a user upgrades PIMS to V11, how can model results from older versions be compared and validated against V11?
Solution: We need to compare several things from the previous version to the new version. Theoretically speaking, there really should not be any differences between the two solutions if the model settings are the same. However, make sure to check the release notes of V11 and see which bugs and enhancements are fixed in this new version. If your model from the old version is susceptible to any unfixed bugs, that will cause an issue in the final output. In the following example, a model is run in V8.8 CP5, and the exact same model is migrated to V11.
Check the general model settings and make a note of everything that you have changed from default. If you are using AO, make sure to choose either XLP or XNLP and do the same in V11. If you are using AO, make note of all the settings you have tuned in XNLP settings: PGUESS confidence level, HQI, nonlinear presolve, and global optimization. Especially in global optimization, make sure you have the exact same multi-start parameters in both versions. In XSLP settings, make note on the Advanced 1 tab of whether you checked improve local solution, use epsilon, and whether you used infeasibility breakers.
Choose the desired cases, run in V8.8, and make sure to generate the final matrix from the run. Make note of the final solution. Go to V11 and open the exact same model. Make sure to check the model settings and see if everything is the same as in the previous version. In V11, there are some new infeasibility breaker features; for example, it can automatically add infeasible solution penalties to your model. But if your model is optimal in V8.8, you really don't have to turn this feature on, so please leave that check box deselected.
First look at the objective function and see if you have the same (or very similar) answers. It is also a very good idea to do a matrix comparison, because the values coming from the matrices of the two versions should be exactly the same, if not very similar. The matrix solution comes directly from the solver and contains more information than the full solution report, but if you would like to search for results in reports, that will also be useful. Copy the matrix output file from V8.8 to the V11 test environment. If you used AO to run, it should be an Xmps.xlp file; it will be the same in V11. You can select different trace levels for your matrix, and the comparison will return a separate file.
The overall results show how many variables you have for each version. There are also separate sections comparing the equations, linear matrix elements, special ordered sets, etc., and they show which variables or equations in the two versions are calculated as different values. If you expand these sections, you should see that most of the values have very small differences. If any calculated value has a very big difference between the two versions, then those variables/equations need to be investigated. But most of the time, if the model is the same with the same settings, you should not need to worry about the results. Keywords: None References: None
Problem Statement: How to select Standard Fins?
Solution: There are 4 types of Standard Fins available. The user can select Standard Fins as the Layer structure on the Application Options tab (Generic Geometry is used to model a new plate, both standard and generic - a combination of standard and non-standard types). Then, on the Fins tab, the user can select one of the built-in types of plates. On this page, you need to specify the height and thickness of the plate and the number of plates in one meter of length. Information about this standard plate can be seen on the next page of Generic Geometry. If you want to change some parameters of the plate, return to the Application Options page and select Generic Geometry; then you will be able to change the configuration of the plate. Keywords: Fin database, Standard Fins, Layer structure, Plate Fin References: None
Problem Statement: What is the maximum number of tags that can be placed in Get, Put, PutOnCOS, and Unsolicited transfer records?
Solution: Aspen InfoPlus.21 has four standard I/O transfer records as follows:

Transfer Record Type - Definition Records - Purpose
Get - IoGetDef, IoGetHistDef, IoLLTagGetDef, IoLongTagGetDef - Reading data from Cim-IO servers
Put - IoPutDef, IoLLTagPutDef, IoLongTagPutDef - Writing data to Cim-IO servers
PutOnCOS (POC) - IoPutOnCosDef, IoLLTagPOCDef, IoLongTagPOCDef - Writing data on change of state (COS) of a particular value
Unsol - IoUnsolDef, IoLLTagUnsolDef, IoLongTagUnsolDef - Requesting periodic unsolicited updates from Cim-IO servers

Each I/O transfer record type has three sizes, based on the maximum length of the I/O tag name that will be used to reference the device data point.

Type - Maximum Number of Characters in IO_TAGNAME
Regular - 39-character addresses
Long (IOLongTag Definition Records) - 79-character addresses
LongLong (IOLLTag Definition Records) - 255-character addresses

The maximum number of tags that can be placed in a record depends upon the type of I/O record used. The following table details the maximum number of tags allowable, based upon record type and tag name character limitation. This limitation exists because the maximum record size of 128K bytes would be reached when expanding the repeat areas to the following sizes.

Record - Regular - Long - LongLong
Get - 1234 occurrences - 558 occurrences - 406 occurrences
Put - 1392 occurrences - 976 occurrences - 422 occurrences
POC - 1109 occurrences - 828 occurrences - 391 occurrences
Unsol - 884 occurrences - 696 occurrences - 359 occurrences

Refer to the Aspen Cim-IO User's Guide for additional details on transfer record configuration.
Keywords: Unsolicited IO_#TAGS occurrence
References: None
Problem Statement: How do I troubleshoot a DB Write Error for a DMCplus controller collection in Aspen Watch?
Solution: A DB Write Error run status means Aspen Watch had a problem writing to the InfoPlus.21 database. This is encountered when updating the controller records in the database, i.e. when a new controller configuration file is loaded. In most cases, the issue is that the controller structure does not match the database structure.
DB Write Error (Gen) indicates an error while updating the General information of the controller.
DB Write Error (Ind) indicates an error while updating the Independent variables information.
DB Write Error (Dep) indicates an error while updating the Dependent variables information.
As a first troubleshooting step, ensure the ccf and mdl files in both the Online and Aspen Watch servers match. Sometimes the controller file in the DMC server is different from the files in the AW server. Manually copy the ccf and mdl files from the Online server to the AW server and then follow the Update procedure.
If this does not work, it will be necessary to dig into the InfoPlus.21 database. The following steps require a certain proficiency in database usage and need to be executed with care. Deleting or modifying other records may cause unexpected behavior in the program. It is advisable to create a copy of the current database snapshot before proceeding. Should you require assistance, contact Technical Support.
Open Aspen Watch Maker and locate the controller with this run status. Identify the associated AW collection task (TSK_AWxx). Identify also the ID for the controller. ID is the internal name of the controller in the InfoPlus.21 database.
Open InfoPlus.21 Manager and look for the collection task. Open the TSK_AW0x OUT file. This file will provide more information about the writing problems. This is a typical message logged by Aspen Watch when encountering these issues:
Write to DB (IND Group 3) unsucessful. recid:5447, numok:28, err:-15
In order to determine the record name associated with recid 5447, open SQLplus and execute the following query:
select name from all_records where recid=5447
This query will provide the record name for record id 5447. If you want to get the definition along with the name, execute the following query:
select name, definition from all_records where recid=5447
The nature of the writing error gives more information about where the issue is. Consider the example of a DB Write Error (Ind) for a controller whose internal ID is 12. Open InfoPlus.21 Administrator. Expand the host name node and expand Definition Records. Look for AW_FolderDef. This folder maintains all the controller information structure in the database. Since the writing issue indicates there is a problem with the Independent variables, look for the C12_INDS folder.
Typically, these folders will have empty repeat area fields. This is what causes the mismatch between the database and the controller information. The way to fix the problem is to fill in these missing repeat area fields manually with the appropriate record names before trying the update procedure again in Aspen Watch Maker. Use the CCF to determine the order of variables. Fill in the record names for the missing fields using this syntax: CxxY_variablename. The record names always begin with the letter C, then the Controller ID plus a single-letter record type ID ("I" in this case for "Independent").
AW_FolderDef folder - Repeat area field syntax
Cxx_INDS - CxxI_variablename
Cxx_DEPS - CxxD_variablename
Cxx_SUBS - CxxS_subcontrollername
Once all the fields are filled in (if a record does not exist, then just leave the field blank, and AW will attempt to create the missing record and fill it in), try the Update procedure again from AW Maker. If the run status was DB Write Error (Dep), the same procedure can be followed; the difference is the folder and the syntax used for filling in the variable records. Keywords: DB Write Error, AW, Aspen Watch, DMCplus, AW collection References: None
Problem Statement: When plotting or including variables in a History table with a 1 s step over 86,400 s, the Plot/History table works fine up to about 8,000 s. Beyond that, data are still available, but Aspen Plus Dynamics automatically switches to a 2 s and then a 3 s step in the presented tables, so only every second or third time point gets reported in the form (with a reporting interval of 1 s in all cases). Is there a limitation on the number of points in these tables and plots?
Solution: This behaviour is as intended. The tables have a limit on the number of rows, which is separate from the history limits. The attached file contains a script to interpolate all the desired times from the variable's history object. The user should only modify values in the recordHistory script's "Inputs" section. Keywords: Plot, History Table, recordHistory, Interpolation. References: None
Problem Statement: A read transfer record is one defined by IOGETDEF, IOLONGTAGGETDEF, IOLLTAGGETDEF, IOGETHISTDEF, IOUNSOLDEF, IOLONGTAGUNSDEF, or IOLLTAGUNSDEF. Sometimes, users will enter the same address into multiple IO_TAGNAME fields in transfer records, causing the address to be scanned multiple times from the process. Attached to this knowledge base article is a query that searches through all read transfer records looking for duplicate IO_TAGNAME entries.
Solution: Download the file FindDuplicateAddressesBeingScanned.txt to your InfoPlus.21 server and open it in the SQLplus query writer. The query loops through all the read transfer records having IO_RECORD_PROCESSING turned ON and, for each occurrence having IO_DATA_PROCESSING turned ON, copies the name of the transfer record, the occurrence number, and the fields IO_TagName and IO_Value_Record&Fld to a temporary table named IOTagNameInfo. Finally, the query selects all rows from the table having duplicate IO_TagName entries, along with the names of the read transfer records in which the duplicate entries appear and their occurrences. Keywords: IOGETDEF IOLONGTAGGETDEF IOLLTAGGETDEF IOGETHISTDEF IOUNSOLDEF IOLONGTAGUNSDEF IOLLTAGUNSDEF IO_TAGNAME Duplicate References: None
Problem Statement: What is the procedure to create an ABE workspace and register it in SPF (SmartPlant Foundation)?
Solution: Attached there is a video which shows the step-by-step procedure of how to create an ABE workspace and register it in SPF (SmartPlant Foundation). Keywords: SPF, SmartPlant Foundation, Workspace, Register. References: None
Problem Statement: What is the procedure to find documents to publish from ABE to SPF?
Solution: Attached there is a video which shows the step-by-step procedure of how to find documents to publish from ABE to SPF. Keywords: SPF, SmartPlant Foundation, Find Documents To Publish, Publish. References: None
Problem Statement: The value of the Move resolution (MOVRES) parameter can be changed in the Controller Configuration File (CCF) for ACO controllers using Aspen DMCplus Build, or in the Simulation section in DMC3 Builder for an RTE controller. What is MOVRES, and how can MOVRES be changed/edited online through the Production Control Web Server (PCWS)?
Solution: What is Move resolution (MOVRES)?
Move Resolution is defined as the smallest change (in engineering units) the controller is supposed to make in the manipulated variable set point, in one cycle, if the controller decides to move this variable. This parameter is only applicable to Manipulated Variables (MVs). Calculated moves smaller than MOVRES are not implemented but are accumulated in Move Accumulation (MOVACC) instead. The significance of MOVRES is that the steady-state solver will not produce a solution that is less than MOVRES away from the current value. If a non-zero change in target is calculated, then the moves will be larger than MOVRES. The MOVACC term adds the accumulated move cycle by cycle; it can be a positive or negative delta value at each cycle. The accumulated value as displayed does NOT show the last value when it reaches the MOVRES threshold. There is also a mechanism in the engine to accelerate the dynamic move: the engine examines the move plan from the current cycle to 1/3 TTSS into the future cycles. If the delta change during this period exceeds MOVRES, the engine will implement a move at the current cycle. This is designed to improve disturbance rejection. More information about Move Resolution can be found in the DMCplus or DMC3 Builder Entry Dictionaries.
How to change MOVRES in the Production Control Web Server (PCWS)?
In PCWS, the MOVRES parameter is not displayed by default, but to get this parameter (or any other parameter) displayed online, the column data set for the variables can be modified through the Configuration page of PCWS. The procedure is slightly different for ACO and RTE applications.
For ACO applications: Go to PCWS – Configuration and on the left-hand side menu select APC ACO (under Column sets). In "APC ACO Column set to edit", select the corresponding column set where the variable should be displayed. Look for the MOVRES parameter in the Available list and add it to the Selected list using the arrow button.
For RTE applications: Go to PCWS – Configuration and on the left-hand side menu select APC RTE (under Column sets). In "APC RTE Column set to edit", select the corresponding column set where the variable should be displayed. Look for the "MoveResolution" parameter in the Available list and add it to the Selected list using the arrow button.
After these changes are applied, the Move resolution parameter will be available for the controllers. Keywords: MOVRES Move Resolution MOVACC Move Accumulation Online PCWS References: None
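For illustration of the MOVRES behaviour described above, with hypothetical numbers: if an MV has MOVRES = 0.5 (engineering units) and the engine calculates a move of 0.2 on a given cycle, that move is not written to the set point; the 0.2 is instead accumulated in MOVACC. A move is implemented at the current cycle once the change planned from the current cycle to 1/3 TTSS into the future exceeds the 0.5 threshold, as described above.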
Problem Statement: This knowledge base article explains how to start Aspen InfoPlus.21 using an older snapshot.
Solution: Aspen InfoPlus.21 maintains a list of snapshots saved by TSK_SAVE locally to the Aspen InfoPlus.21 server. You can see this list by opening the Aspen InfoPlus.21 Manager, selecting TSK_DBCLOCK in the defined task list, and clicking the Snapshots button that appears to the right of the Aspen InfoPlus.21 Manager. The snapshots are listed from most recent to oldest. By default, Aspen InfoPlus.21 uses the snapshots in this list if there is a problem loading the snapshot contained in the Command line parameters field of TSK_DBCLOCK. To use an older snapshot, uncheck the box Loading Order By Time. This enables the buttons Move Up and Move Down at the top of the screen. You can now select an older snapshot and move it to the top of the list. After moving the older snapshot to the top of the list, press OK to close the screen. Next, use Windows Explorer to rename the snapshot normally loaded by TSK_DBCLOCK. By default, this is C:\ProgramData\AspenTech\InfoPlus.21\db21\group200\InfoPlus21.snp; however, this may vary from location to location. Please check the Command line parameters box of TSK_DBCLOCK for the location of your default snapshot. After renaming the default snapshot using Windows Explorer, start Aspen InfoPlus.21. TSK_DBCLOCK will see the default snapshot is missing and will try to load the snapshot you moved to the top of the snapshots list. Note: After starting Aspen InfoPlus.21, you must open the Aspen InfoPlus.21 Manager, select TSK_DBCLOCK in the defined task list, click on the Snapshots button, and recheck the box Loading Order by Time. Keywords: References: None
Problem Statement: You may see the error "Failed to create the AFW object in IP21ProfileGetAFWAllRoles function" when opening the Aspen InfoPlus.21 Manager or other Aspen InfoPlus.21 programs.
Solution: First try opening a command window as an administrator on the Aspen InfoPlus.21 server and entering the command IISRESET. Then open Windows Services and restart the Aspen InfoPlus.21 Task Service. If the problem persists, then there may be a bitness mismatch between the AFW Security Client Service and the installed version of Aspen InfoPlus.21. For example, the 64-bit version of the AFW Security Client Service may be running on a server having a 32-bit Aspen InfoPlus.21 server installed or vice versa. To check the version of the AFW Security Client Service being used, open Windows Services, right click on the AFW Security Client Service, select properties, and look for the field "Path to executable:" The 64-bit version of AfwSecCliSvc.exe is started from C:\Program Files\AspenTech\BPE while the 32-bit version of AfwSecCliSvc.exe is located in C:\Program Files (x86)\AspenTech\BPE. To change the path to the executable, open regedit and find the key HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\AfwSecCliSvc\ImagePath Correct the ImagePath, perform an IISRESET, and restart the Aspen InfoPlus.21 Task Service. Keywords: References: None
Problem Statement: How can I move all Aspen Calc calculations from one schedule group to another?
Solution: Before using the query attached to this solution, backup the Aspen Calc schedule groups as follows: Stop the AspenTech Calculator Engine service. Backup the file C:\ProgramData\AspenTech\Aspen Calc\Bin\Schedules.atc. Restart the AspenTech Calculator Engine service. Attached to this knowledge base article is a query named MoveCalculationsBetweenScheduledGroups. Download the file into the Aspen SQLplus Query Writer. The query prompts for the source and target Aspen Calc schedule groups, and then asks for confirmation that you want to move all the Aspen Calc calculations from the source schedule group to the target schedule group. After receiving confirmation, the query inserts each calculation in the source schedule group into the target schedule group and then removes the calculation from the source schedule group. Keywords: Aspen Calc schedule group FifteenSec FiveSec OneDay OneHour OneMinute References: None
Problem Statement: How to use COA exception with Equipment type. For instance, how can I separate the cost of Mag drive pump from other centrifugal pump in the report.
Solution: You can create new COAs and allocate them using exceptions based on equipment symbol and equipment type. Find below an example of procedure. ? Create a project ? In the Project Basis view, select "Default" as Code of Accounts ? Edit the "Default" COA file and select Definitions ? Click on ADD and create a COA 1611 with EQ as COA group and "Magnetic Drive Pump" as COA name. Click OK ? Select Allocations and add a line to allocate the COA 161 (default for Centrifugal Pump) to the new COA 1611, set the COA exception flag to E, in the equipment symbol column to CP and in the Equipment item type specify MAG DRIVE ? Evaluate In the reports, the magnetic drive pump will be allocated to COA 1611. You can then index this COA separately from other centrifugal pumps. The equipment symbol to enter on the COA exception allocation is the first part of the model name (e.g. CP, VT, TW, DDT, HT etc) and the second part should be entered in the equipment item type (e.g. CENTRIF, API 610 etc). Keywords: COA Exception References: None
Problem Statement: In complex Aspen Production Execution Manager (APEM) environments performance may not be currently optimized. For scenarios where data entry and overall responsiveness is perceived slow, it is recommended to apply the following settings outlined in the solution.
Solution: The optimal performance can be achieved by setting a longer refresh rate. The recommended settings should be checked and applied to the configuration and thread property for the following keys:
WORKSTATION_REFRESH_PERIOD = 60 (seconds)
TRACKING_REFRESH_PERIOD = 60 (seconds)
If the Basic Phase has threads that contain time consuming scripts such as database access, it is recommended that the time interval be set to 10 seconds or longer. * NOTE: It is always a good idea to make this longer than 1 second when the thread code needs to access a database. Keywords: Delay, Refresh Periods, Ideal settings, Improve speed References: None
Problem Statement: Writing lab time stamps from an APC application to a time stamp variable on the DCS is possible through the configuration of a TIME write using CIMIO. However, sometimes the written timestamp is not correct, as it has a delay with respect to the actual value.
Solution: In the CIMIO Interface Manager (V9.0 and above) there is an option to set the time zone for timestamp reads and writes for the CIMIO for OPC interface. If the CIMIO Interface Manager is not available, another option to adjust timestamp reads/writes only for IQ and ACO-based DMCplus applications (non-RTE) is to define the following environment variable on the Online server to set how many minutes to adjust for the time zone: System environment variable name: CIMIO_CIO_DeviceName_ADJTIME, where "DeviceName" is the name of the IO device used. To set this environment variable, go to Control Panel – System – Advanced System Settings and, in the System Properties window, click on Environment Variables to add the new variable. For the environment variable change to take effect, the following steps should be followed: Restart the ACO Utility Server. Please note that ALL DMCplus and IQ applications will be stopped when you restart the service (you may have to schedule this with operations). Restart the CIMIO for OPC connection. Keywords: CIMIO for OPC Inferential Qualities Time stamp References: None
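For example (the device name here is hypothetical): if the Cim-IO logical device is called IOOPC1, the system environment variable to create would be CIMIO_CIO_IOOPC1_ADJTIME, with a value equal to the number of minutes by which the written timestamps should be adjusted for the time zone. Verify the sign of the adjustment against your own time zone offset before relying on it in production.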
Problem Statement: The legacy db tables (tables that start with PM, queries that start with R_) are no longer generated in AO from V9, nor in DR from V10. So if you would like to extract the information in those legacy tables, you need to use SQL queries. Whether your model is an Access or SQL Server database, the following query will extract the same information as table PMQUALITY.
Solution: Run this directly as an SSMS script if your model is a SQL Server database, or in SQL view if your model is in Microsoft Access.

SELECT RW_StreamQualities.SolutionID, RW_StreamQualities.CaseID, RW_StreamQualities.PeriodID,
       RW_StreamQualities.StreamTag, RW_StreamQualities.QualityTag, RW_StreamQualities.Value,
       RW_BlendQualities.MinValue, RW_BlendQualities.MaxValue, RW_BlendQualities.MarginalValue
FROM RW_StreamQualities LEFT JOIN RW_BlendQualities
  ON RW_StreamQualities.StreamID = RW_BlendQualities.BlendID
 AND RW_StreamQualities.QualityID = RW_BlendQualities.QualityID
 AND RW_StreamQualities.SolutionID = RW_BlendQualities.SolutionID
 AND RW_StreamQualities.CaseID = RW_BlendQualities.CaseID
 AND RW_StreamQualities.NodeID = RW_BlendQualities.NodeID
 AND RW_StreamQualities.PeriodID = RW_BlendQualities.PeriodID
ORDER BY RW_StreamQualities.SolutionID, RW_StreamQualities.CaseID, RW_StreamQualities.StreamTag
UNION
SELECT RW_BlendQualities.SolutionID, RW_BlendQualities.CaseID, RW_BlendQualities.PeriodID,
       RW_BlendQualities.BlendTag, RW_BlendQualities.QualityTag, RW_BlendQualities.Value,
       RW_BlendQualities.MinValue, RW_BlendQualities.MaxValue, RW_BlendQualities.MarginalValue
FROM RW_BlendQualities LEFT JOIN RW_StreamQualities
  ON RW_StreamQualities.StreamID = RW_BlendQualities.BlendID
 AND RW_StreamQualities.QualityID = RW_BlendQualities.QualityID
 AND RW_StreamQualities.SolutionID = RW_BlendQualities.SolutionID
 AND RW_StreamQualities.CaseID = RW_BlendQualities.CaseID
 AND RW_StreamQualities.NodeID = RW_BlendQualities.NodeID
 AND RW_StreamQualities.PeriodID = RW_BlendQualities.PeriodID
WHERE RW_BlendQualities.BlendTag NOT IN (RW_StreamQualities.StreamTag)
  AND RW_BlendQualities.QualityTag NOT IN (RW_StreamQualities.QualityTag)
ORDER BY RW_StreamQualities.SolutionID, RW_StreamQualities.CaseID, RW_StreamQualities.StreamTag

You can alias the column names to the same as PMQUALITY by putting "AS Column_name" after each selected column. This should give you a list of streams, their properties, and min/max spec if they are blended. You can also develop other queries that extract info from other legacy tables following the same query pattern. You can alias each table (AS a, LEFT JOIN xxx AS b) to shorten the SQL query, but if you are extracting data from an Access db, this is not supported. Keywords: None References: None
Problem Statement: When updating a database why do I observe error message Failed to execute command [ALTER TABLE PLINV_COMP ADD CONSTRAINT FK_PLINV_COMP_PIPELINE_INV_PLINV_XSEQ FOREIGN KEY (PLINV_XSEQ) REFERENCES PIPELINE_INV (X_SEQ):)
Solution: This KB Article explains the possible root causes of the error message Failed to execute command [ALTER TABLE PLINV_COMP ADD CONSTRAINT FK_PLINV_COMP_PIPELINE_INV_PLINV_XSEQ FOREIGN KEY (PLINV_XSEQ) REFERENCES PIPELINE_INV (X_SEQ):) that can appear when updating an APS database, and how to troubleshoot it. This error can be observed when there are missing records in table PIPELINE_INV; the number of records should match the records in tables PLINV_COMP and PLINV_DEST. This can be produced by an incomplete unarchive process; remember there is a 2 GB Access database limit. If this is the case, check the SQL database: if there are no missing records, this problem will not appear when running the update script against the SQL database. Another possible root cause can be database corruption due to an improper cleanup process or to the bad practice of altering database records manually. If this is the case, contact support for further assistance. Note that it is not recommended to manually modify or alter database records. Keywords: None References: None
Problem Statement: In MBO, is it a good practice to blend RBOB and CARB components in the same tank?
Solution: This KB Article explains why it is not a good practice to blend RBOB and CARB components in the same tank and what the possible outcomes of doing so can be. The current assumption behind the MBO design is that you will blend RBOB and CARB into different tanks. You don't necessarily have to do this (blending into different tanks), but the optimizer works much better that way. You can still get lucky if you blend RBOB and CARB into the same tank (like with the 5.7% blend), but the algorithm is more susceptible to failing. The reason is that we define which property balances to include in the matrix per tank, not per blend. These are the PBALS, and if any blend in the tank requires a property balance, then all blends in the tank will include that equation. If you need to have RBOB and CARB components blended in the same tank, one thing you could do is deselect the option Regulatory Prop Constraints. This allows the blends to violate CARB and RFG limits. Note that unchecking this option may lead to blends that violate regulatory limits and may cause solution instability, though if the model specifications are properly defined this should not have any impact, since MBO will honor all the user-defined specifications. Keywords: None References: None
Problem Statement: Why does the temperature of the outlet stream of a pump become higher than the inlet temperature, even if pump efficiency is 100%?
Solution: Let's first look at the energy balance of a system in general: enthalpy out = enthalpy in + work + heat. In the case of the pump, the heat input is not present, so the energy balance of the pump is: h_out = h_in + PBrake / flow, where h_out = molar enthalpy of the outlet stream, h_in = molar enthalpy of the inlet stream, PBrake = brake power, flow = molar flow. The brake power is the power delivered to the fluid. One can work out the "ideal" power to increase the pressure of an incompressible liquid, PFluid = volume_flow * pressure_increase. This is the power required to increase the pressure of the fluid. Fluid inefficiency is accounted for with the introduction of efficiency: PBrake = PFluid / fluid_efficiency. The pump model also provides a "motor efficiency", Pmotor = PBrake / motor_efficiency. The difference between Pmotor and PBrake is assumed to be lost to ambient, and not given to the fluid. The pump model uses the calculated outlet enthalpy and the specified outlet pressure to work out the outlet temperature. Therefore it is expected that the outlet temperature will be higher than the inlet temperature. It is also expected that when fluid_efficiency is less than 100%, the increase of temperature will be more significant, since the difference between PBrake and PFluid is converted to heat. The increase of temperature when pump efficiency is 100% is puzzling. As explained above, the outlet enthalpy is increased due to the power input. The intuition is that most of that power should be used to increase the pressure, and therefore when efficiency is 100% the temperature increase should be minimal. When using the IDEAL property method with water at 25C, 1 bar, with a Pump block with 100% efficiency and outlet pressure = 100 bar, the outlet temperature is 27C. When using a steam table model, such as IF97, the outlet temperature is only 25.2C. The problem is due to the fact that property methods such as IDEAL use an incompressible liquid density model (the Rackett liquid mixture model), and as a result the liquid mixture enthalpy is completely independent of the pressure. This implies that the increase of enthalpy (due to PBrake) is entirely converted into a temperature increase. On the other hand, property methods such as IF97 account for the liquid compressibility, and therefore some of the power input is indeed converted to pressure increase in the enthalpy calculation. You should review the details of the VLMX and HLMX routes to work out the details of the liquid mixture molar volume and liquid mixture enthalpy calculations. The increase of temperature with an incompressible liquid density model is small enough that it can be ignored. Some users add a Heater block to reset the temperature. Keywords: pump, Maxwell equations, incompressible liquid References: None
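As a rough back-of-envelope check of the water example above (approximate hand values for liquid water, not taken from the simulation): the ideal pumping work per unit mass is v * delta_P = 0.001 m3/kg * 99e5 Pa, or about 9.9 kJ/kg. If the liquid is treated as incompressible, essentially all of this enthalpy increase appears as sensible heat, so delta_T = 9.9 kJ/kg / 4.18 kJ/(kg*K), or about 2.4 K, which is consistent with the reported rise from 25C to roughly 27C with the IDEAL property method.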
Problem Statement: How do you automate Aspen Plus tasks via Python?
Solution: The Aspen Plus Windows user interface is an ActiveX Automation Server. The ActiveX technology (also called OLE Automation) enables an external Windows application to interact with Aspen Plus through a programming interface using a language such as Microsoft's Visual Basic. The server exposes objects through the COM object model. Information about the Automation Interface is in Help -> Simulation and Analysis Tools -> Custom Models -> Using Aspen Plus Via Automation. Unlike languages such as VBA, VB.NET, and C#, Python does not automatically create a wrapper to interact with COM interfaces. You can download and install a third-party extension known as "pywin32" to act as the additional layer that facilitates communication between Python code and COM interfaces. Note that AspenTech does not endorse the use of third-party open-source modules. There is an article on the web that discusses this: http://kitchingroup.cheme.cmu.edu/blog/2013/06/14/Running-Aspen-via-Python/ Please note that AspenTech has not reviewed the content in the above link. Keywords: Python, Automation, ActiveX References: None
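As a minimal sketch of the pywin32 approach only - not AspenTech-validated code; the backup file path, stream names, and variable node paths below are hypothetical and depend on your model, and the ProgID version may vary with your installation - a typical automation session looks like this:

import win32com.client as win32

# Connect to the Aspen Plus ActiveX automation server (version-independent ProgID).
aspen = win32.Dispatch('Apwn.Document')
aspen.Visible = True                     # optional: show the Aspen Plus user interface

# Open an existing backup file (hypothetical path).
aspen.InitFromArchive2(r'C:\models\example.bkp')

# Read and write variables through the Variable Explorer tree.
# The node paths below are examples; they depend on the flowsheet and stream names.
feed_temp = aspen.Tree.FindNode(r'\Data\Streams\FEED\Input\TEMP\MIXED')
feed_temp.Value = 50.0

# Run the simulation and read back a result.
aspen.Engine.Run2()
product_temp = aspen.Tree.FindNode(r'\Data\Streams\PRODUCT\Output\TEMP_OUT\MIXED')
print(product_temp.Value)

# Close the background Aspen Plus instance when finished, or reuse it for further runs.

Running the script requires Aspen Plus and pywin32 to be installed on the same Windows machine; the COM calls used here (InitFromArchive2, Tree.FindNode, Engine.Run2) come from the Aspen Plus automation interface described in the Help topic referenced above.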