Problem Statement: How and when to use Keyword #PL_DST?
Solution: Use the keyword #PL_DST (Period Length Using Daylight Savings Time) to indicate that period lengths are to be calculated using daylight savings time (DST). Application: To define period lengths as DST values, enter the keyword #PL_DST in cell A7 of the PREP sheet; the keyword must be in cell A7. This causes a DST-compliant Period Length to be populated in the associated column B cell (B7) during the 'Simulation'/'Publish All' operation. The B7 value is system calculated and is there for informational purposes. Example: Use this keyword only if you need an accurate period length on the days when DST transitions happen, i.e., the day with only 23 hours and the day with 25 hours. The current formula for "Period Length" (cell B5 of the PREP sheet), =B4-B3, does not account for DST adjustments. To make "Period Length" DST compliant: 1. Add the keyword #PL_DST at Row=7, Col=1 (see the screenshot below); a DST-compliant Period Length will be populated at cell (7,2) during the 'Simulation'/'Publish All' operation if the keyword #PL_DST exists in the PREP sheet. 2. Change the value for Period Length (B5) as shown in the screenshot below. This will allow users to use the correct "Period Length" in their Excel calculations. Keywords: None References: None
Problem Statement: How does the Refresh functionality work in APS? Does it notify other users of changes made directly in the database?
Solution: This KB Article explains how the “Refresh” functionality works in APS and why APS will not notify other users of changes made directly in the database. In cases where multiple users are modifying the same schedule, it is important that all users are notified when an update is made to the current schedule. If events are modified from the UI or automation, the system uses the SCHEDULE_LOG table to monitor updates made to the schedule. The system automatically queries the SCHEDULE_LOG table on a periodic basis. If the system finds that changes have been made to the schedule since the user last loaded it, the corresponding button on the Events toolbar is enabled. You can then click this button to display the Schedule Change Log dialog box, which shows the list of changes that have occurred since the schedule was originally obtained. You can then choose to refresh the schedule if necessary before making any additional modifications. Note that if changes are made directly in the database, the described process will not happen and other users will not be notified of such changes. Keywords: None References: None
Problem Statement: What is the best model to evaluate the viscosity of aqueous mixtures such as ethanol-water, dioxane-water, etc.?
Solution: No systematic investigation has been made, so unfortunately you will have to perform this validation yourself. We can, however, provide some suggestions. The viscosity of a liquid mixture is calculated from the viscosities of the pure components and a mixing rule. Therefore, one should first make sure the pure-component viscosity is correct. You can access experimental data for pure components and binary mixtures using the NIST button; this gives you quick access to experimental data and references. You may want to review the reference papers if you have access to them. It is important to note that the mixture models are not predictive: for accurate properties, you must regress the binary interaction parameters. The Andrade model may be suitable, but the MUASPEN model may provide more flexibility in fitting the data. Keywords: None References: None
Problem Statement: How does Property Balance work in MBO?
Solution: Property balances for finished products are always performed. In periods where the property balances are turned off, the properties are fixed based on what the simulator calculates for each property before the optimization. In other words, for each period, the optimizer uses what appears on the component property trend chart for that property. Optionally, you can specify the duration for which the property balances for a specific component are performed. For linear properties, the property balance for a property of a given component tank is calculated from the following quantities:
I1 = Component inventory at period end
P1 = Component property at period end
I0 = Component inventory at period start
P0 = Component property at period start
QR0 = Component rundown quantity
PR0 = Component rundown property
QO1 = Component runout quantity
Note that property balances are relevant when component properties could change as a result of the optimization. For example, if there is a transfer or a receipt going into the component tank, and this event is being optimized, then property balances are relevant. Property balances are not relevant when nothing in the scope of the optimization will change the component properties from their simulated values. If a component has a rundown to the component tank with different qualities than what is currently in the tank, at the same time that the component is being used in a blend, then property balances are relevant and are properly taken into account, because the amount of the component used for blending is being optimized and because this amount will impact the resulting properties of the tank when mixed with the incoming rundown.
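The balance equation itself appears in the original article as an image; a plausible reconstruction from the quantities above, assuming a well-mixed tank and that the runout leaves at the end-period property (an illustration, not the documented MBO equation), is:

I_1 P_1 = I_0 P_0 + Q_{R0} P_{R0} - Q_{O1} P_1,   with   I_1 = I_0 + Q_{R0} - Q_{O1}

which rearranges to

P_1 = (I_0 P_0 + Q_{R0} P_{R0}) / (I_0 + Q_{R0})

Keywords: None References: None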
Problem Statement: Sometimes it is desirable to allocate an array whose size is the same as the number of streams connected to a multiport. For example, the incoming streams may have a mass flow rate field but not mole flow, and you would like to calculate and store the missing flow associated with the incoming streams.
Solution: For this specific problem you could extend the stream definition to include the extra variables. The multiport object property ConnectionSet is a string set with the list of names of the streams currently connected to the multiport. It can be used to declare arrays. In the model, to declare an array with the same size as the number of streams connected to the multiport, use the following statements:

feed as input multiport of MaterialPort;
Fm(feed.ConnectionSet) as flow_mass;
MW(feed.ConnectionSet) as molweight;

for i in feed.ConnectionSet do
  call (MW(i)) = pMolweight(feed.Connection(i).z);
  Fm(i) = MW(i) * feed.Connection(i).F;
endfor

This model evaluates the molecular weight (MW) for each stream connected to the feed input port, then evaluates the corresponding mass flow rate. Keywords: Multiport, array, automatically, size, length, initialize, port References: None
Problem Statement: Aspen SQLplus automated reports defined by SQLReportDef may display the error "ADSA SMTP server not configured" in the STATUS field.
Solution: This error means that the external task TSK_SQLR successfully created the report but could not e-mail it using SMTP. This article has some suggestions for troubleshooting this problem. 1. Verify the SMTP server is accessible from the Aspen InfoPlus.21 server. Attached to this article is a query named TestSMTPServer. This query prompts for the SMTP e-mail server name, a sender's e-mail address, and a recipient's e-mail address. If the recipient does not receive the test e-mail, then the SMTP server is not accessible from the Aspen InfoPlus.21 server; please work with your IT department to resolve this issue. 2. Verify there is an ADSA data source pointing to the Aspen InfoPlus.21 server that is named the same as the server's node name, and make sure the "Aspen Simple Mail Transfer Protocol (SMTP) Configuration" service points to the SMTP e-mail server. 3. Verify that the value of the ERROR_TYPE field in the SqlReportDef record is OK. If it is something like "ADSA SMTP server not configured", then go back to item #2 above and double-check the ADSA configuration.
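If you prefer to test connectivity outside SQLplus, the same check can be scripted. Below is a minimal Python sketch of such a test (not the attached SQLplus query); the server name and both addresses are placeholders you must replace:

import smtplib
from email.message import EmailMessage

# Placeholders - substitute your SMTP host, sender, and recipient.
msg = EmailMessage()
msg["Subject"] = "SMTP connectivity test from the InfoPlus.21 server"
msg["From"] = "sender@example.com"
msg["To"] = "recipient@example.com"
msg.set_content("If you receive this, the SMTP server is reachable.")

# Run this on the Aspen InfoPlus.21 server so the test exercises the same network path.
with smtplib.SMTP("smtp.example.com", 25, timeout=10) as server:
    server.send_message(msg)

Keywords: None References: None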
Problem Statement: How do you generate an XY diagram to compare with published data?
Solution: The method will depend on the precision you require. For a quick check of isobaric or isothermal binary vapor-liquid equilibrium data, the interactive property analysis is the easier option. Go to the Properties environment, then on the Home tab of the Ribbon, in the Analysis group, click the "Binary" button. You can select the type of diagram and the components, and you can also specify multiple temperatures or pressures. When you are done, click the Run Analysis button; this will generate the plot. You can also copy the results from the "Results" tab of the analysis if you want to process them in another program such as Excel. For systems with liquid-liquid immiscibility, you may want to use the Mixture analysis; with the Binary analysis, only a few points are generated. See for example water-butanol: The calculation with the Mixture analysis requires a few more actions. The Mixture button is just under the Binary analysis button in the Analysis group on the Home tab of the Ribbon. You can select the 2 components, then enter some flow rate. You can then vary the mole or mass fraction of one component from 0 to 1, and have the dew point and bubble point reported (property sets TBUB and TDEW for a TXY diagram, or PBUB and PDEW for a PXY diagram). The specified temperature does not matter, but you have to enter some value. Very important: under Calculation Options, you must change the valid phases to "Vapor-Liquid-Liquid". Property set properties: We get the following plot (it is not very different from the one obtained with the simple TXY; in the case of very immiscible systems, the quick TXY will show straight lines for the dew point lines, while the more elaborate Mixture analysis will show the curved shape). You can use the tools on the Format tab to format the plot (e.g. use a single y axis for both dew and bubble temperatures, and use lines only for better readability). If you import the data into Aspen Plus (in the Data folder), for example using the NIST database tool, you could also switch to regression mode and create a regression case in "Evaluation" mode. To do this:
- click the NIST button on the Home tab
- select Binary data
- select WATER and METHANOL components
- click Retrieve data
- in the list of data, we have picked the first isobaric data set, VLE005
- click Save Data
- on the Home tab, click Regression mode
- in the Regression folder, click New to create a new regression case
- select the data set and click the Evaluation button
- click Run
- go to the regression case and click the TXY plot button
Keywords: None References: None
Problem Statement: How to create a blend assay with customized criteria.
Solution: You can assign different assays by defining a crude tank as a crude derivative tank. In a crude derivative tank you can change its information using the Units workbook. Attached is a demo model in which TA (crude KIR) and TB (crude MIN) make a transfer to TDUM (a dummy derivative tank). In the POST sheet, a different assay is assigned to the tank in the #RESULT section depending on a specific logic (a BlendAssay parameter unit and IF logic). Then all the content of the TDUM tank (with the assigned blended assay) is transferred to TC; tank TC will hold the different compositions of the blended assays. If you just want to assign blended assays and do not need to track different compositions in the tank, then only TDUM is needed and this tank would charge the crude unit. Please see the attached demo model and let me know if you have any questions. 1. Create Tanks A, B and C as crude tanks. 2. Assign them inventory with EIU or directly in the database in table CRDINV. 3. Create the dummy derivative tank. 4. In table INIT, call CRUDES as follows: 5. In Refinery Overview, create unit BLENDASSAY with description "Blend Assay To Use", type "Parameter Continuous Operation", and unit parameters 1 %A, 2 %B as follows: 6. Create the event screen Blend Assay. 7. Add control variables TA, TB, TDUM, TC, BLENDASSAY, CR1. 8. In the event screen Blend Assay, add trends as follows: 9. Create a crude transfer "Dummy Transfer" where Tank A and Tank B are sent to TDUM as follows: 10. Modify table POST as follows: a. Call the CRUDES information from table PREP. b. In #INPUT, add BLENDASSAY. c. In #RESULTS, add TDUM as follows, with the criteria to have a value of 1 if ARH crude is being used and 0 if it is not; for the rest of the crudes select the value of 0. d. Add the percentage of crudes and the criteria you select, for example using ARH crude if KUW crude is under 50. 11. In the Blend Assay event screen, add a crude transfer TC-TDUM event where crude is transferred to Tank C. 12. Copy the Dummy Transfer and TC-TDUM events and add a crude run as follows: You will end up with two tanks being blended and then transferred to a tank with a different blend assay. Keywords: None References: None
Problem Statement: What are the expected values and use of UseIndication in the <Property> XML tag section?
Solution: This Tech Tip gives a brief explanation of what UseIndication is used for and what its expected values are. The UseIndication value is only applicable to the Honeywell BPC XML output file; it is found under the <Property> XML tag section and is used to indicate whether the associated property specification should be used by BPC. The expected values of UseIndication are Boolean: 1, 0, or blank. Keywords: None References: None
Problem Statement: Is it possible to add 3rd-party views or to insert data into the PIMS Results Database?
Solution: Adding 3rd-party views to, or inserting data into, the PIMS Results database is not supported or recommended. No modification to the PIMS Results database is supported. Customers should only use Report Writer views to get data from the database and should not insert data into the PIMS Results database tables. Keywords: None References: None
Problem Statement: PIMS does not display the solution status and the matrix is not generated. Why is this behavior being observed?
Solution: This KB Article explains why PIMS does not display the solution status and the matrix is not generated. This can be observed, and would be expected, if the Solve Matrix option of the Standard Model Execution dialog box is not checked, as shown in the following image: If only the Generate Matrix option is selected, then PIMS will only generate the problem matrix. In order to solve the generated matrix, the Solve Matrix option should be selected. To have the matrix generated and solved, both the Generate Matrix and Solve Matrix options should be checked. Keywords: None References: None
Problem Statement: When can ASELxxx Columns and Rows be observed in the Matrix and what do they represent?
Solution: ASELxxx columns will be present in the PIMS matrix to indicate Material Sale by Group, where xxx is the group tag. ASELxxx rows will be present to indicate material or product constraints by group, where xxx is the group tag. We can observe the following example using the Volume Sample model, when we use group GSO as indicated below: We will obtain the following structure in the matrix: Keywords: None References: None
Problem Statement: Instructions on how to create a 3D plot in ACM.
Solution: For 3D plots, the plotting variable must have a 2-dimensional coordinate. The example code below illustrates how you can set a 2-dimensional coordinate for the variable Z.

Model Sphere
  N as IntegerParameter(100);
  X([0:N]) as RealVariable(fixed);
  Y([0:N]) as RealVariable(fixed);
  Z([0:N],[0:N]) as RealVariable;

  // Prepare the 2-dimensional coordinate grid for Z
  for i in [0:N] do
    X(i) : i*2/N;
    Y(i) : i*2/N;
  endfor

  // Upper hemisphere of a unit sphere centered at (1,1): Z^2 = 1 - (X-1)^2 - (Y-1)^2
  for i in [0:N] do
    for j in [0:N] do
      if (X(i)-1)^2 + (Y(j)-1)^2 < 1 then
        Z(i,j)^2 = 1 - (X(i)-1)^2 - (Y(j)-1)^2;
      else
        Z(i,j) = 0;
      endif
    endfor
  endfor
End

After setting up the model, please follow the instructions below:
- Change the plot type to 3D Plot
- Change the dimension of the plotting variable to 2D
- Put the plotting variable on the plot and set up the X axis and Y axis
Keywords: Aspen Custom Modeler, Profile Plot, 3D References: None
Problem Statement: How to use Parametric Analysis with Process Limits Rows
Solution: For this example, we will use the sample model Gulf Coast located in C:\Users\Public\Documents\AspenTech\Aspen PIMS\Pims\Gulf Coast.
Remember that in the table display:
- The row names can be any matrix variable or row, e.g. the process limit row ZLIMRTT
- This table is shared by all the Solution Analysis tools
For Table PARAOBJ:
- ROWNAMES are valid LP column/row names
- The Initial/Delta/Final columns indicate the cost/price/limits defined by TYPE
- FINAL > INITIAL + DELTA * n (where n is a positive integer)
- You can provide either STEPS (n) or DELTA
- Use GROUP to indicate variables that should be varied at the same time
The results when using ZLIMRTT can be found in the "PIMS Results in Excel" Parametrics file, located in the model folder. Keywords: None References: None
Problem Statement: I am using the ABML method D86 PERCENT OFF in PIMS; can this method be implemented in MBO or APS?
Solution: The D86 PERCENT OFF method can be used to predict D86 values. This method takes the component D86 seven-point distillation curve and converts it to true boiling point (TBP) temperatures. In APS and MBO, one option to consider is to implement a UBML method. The distillation correlations are the same between PIMS and MBO, since both share the core transformations (temperature to percent off and back). One important difference (which simplifies the workflow in MBO) is that MBO can directly use temperatures for blending without requiring the user to define percent-off points and the related forward and reverse calculations (all of this is done automatically in MBO, which is why ND86TOPERCENTOFF is not exposed in MBO). This approach allows the use of the same distillation correlations as PIMS without the need to resort to modeling percent-off temperature points. For MBO you can consider ABML_HC94; use this method to predict the D86 Initial Boiling Point, T30, T50, T70, T90 and Final Boiling Point of motor gasoline (MoGas) and middle distillate blends. Note that the equations in the reference literature expect a seven-point distillation curve. The implementation for Petroleum Scheduler and Multi-Blend Optimizer uses either a seven-point or a five-point curve, with the T30 and T70 points linearly interpolated from the T10, T50 and T50, T90 temperatures respectively. Keywords: None References: None
Problem Statement: Firewall configuration recommendations and considerations for enabling Aspen Watch history for PCWS
Solution: Solution https://esupport.aspentech.com/S_Article?ID=123707 explains the general firewall port configuration for APC applications. Please review that KB before performing the steps below. In V10, the ProcessDataREST service was changed to communicate using a different external task, TSK_ORIG_SERVER, instead of TSK_DEFAULT_SERVER. The correct configuration for 'Aspen Watch history access for PCWS and Aspen PID Watch' in V10 is as follows:
ON THE WATCH SERVER: Modify the TSK_ORIG_SERVER command line and change it to “-v1 -n10016” (without the quotes). We are now telling TSK_ORIG_SERVER to be the process that listens on port 10016 instead of TSK_DEFAULT_SERVER. Restart this task.
ON THE WEB SERVER: Edit the infoplus21_api.config file (on the PCWS server) and change it to send port 10016 requests to the v1 external task (TSK_ORIG_SERVER):
<?xml version='1.0'?>
<INFOPLUS21_API_CLIENT RPCTimeout="600">
  <Servers>
    <Server HostName="AspenWatchHostName">
      <APIServer Version="1" Port="10016"/>
    </Server>
  </Servers>
</INFOPLUS21_API_CLIENT>
You will need to perform an IISRESET on the web server for this change to take effect. Now try the 'History plot' of any MV or CV; if the page remains blank or shows any error messages, please contact Aspen Support. Keywords: PCWS History Plot Aspen Watch Firewall Ports References: None
Problem Statement: When creating a DSN file, which ODBC driver should be selected to work with Aspen Petroleum Scheduler, Aspen Refinery Multi-Blend Optimizer, and SQL databases?
Solution: We recommend that APS and MBO users select SQL Server Native Client as the ODBC driver instead of SQL Server.
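The same driver choice applies to scripted connections. As an illustration only, here is a hypothetical Python/pyodbc sketch; the driver version string, server, and database names are placeholders to adapt:

import pyodbc

# "SQL Server Native Client 11.0" is the recommended driver family;
# the generic "SQL Server" driver is the one to avoid.
conn = pyodbc.connect(
    "DRIVER={SQL Server Native Client 11.0};"
    "SERVER=myserver;DATABASE=APS_MODEL;Trusted_Connection=yes;"
)

Keywords: None References: None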
Problem Statement: See article 127329-2: How to create a back-up of history files. AspenTech (very strongly) recommends using TSK_HBAK for backing up Aspen InfoPlus.21 history filesets; that article also explains the differences between Active, Shifted and Changed filesets. The question answered in this new article is: having used this procedure to back up the filesets, how do you restore those backed-up filesets in the case of, for example, a catastrophic disk failure on the production system?
Solution: As is hopefully understood from 127329-2, assuming there has not been a fileset shift, each time the HistoryBackupDef record is activated, the resultant backed-up fileset in the Active directory is simply a newer incremented version of the files that were last saved in the Active directory. Therefore, restoring the Active fileset would simply be a case of copying the Arc.Dat, Arc.Byte, and Arc.Key files from the Active backup directory into the correct location. Now let's look at the case where a fileset shift has taken place on the Aspen InfoPlus.21 system AND the HistoryBackupDef record has been activated after the shift. As we know, because the fileset shifted, the one it shifted out of will have a status that includes the word Shifted. When the backup record is next activated, this causes the fileset with a Shifted status to be written to the Shifted directory, and the status to change back to mounted. This backup in the Shifted directory now supersedes any copies of that fileset that may remain in the Active directory and thus would be the one used for fileset restore. However, suppose that some time after the Shifted backup was made, something changed in that fileset; most typically that would be because somebody ran an Aspen SQLplus query to insert new history or to modify existing history. As mentioned in 127329, the status of that fileset now contains the word Changed. Again, as we know, that means that activation of the backup record will cause that fileset to be copied to the Changed directory, and the status changed back to mounted. This backup in the Changed directory now supersedes any copies of that fileset that may remain in the Shifted or Active directory and thus would be the one used for fileset restore. NOTE: See article 132774-2: Can/Should the HBAK Location for Shifted and Changed be the same? You may want to consider making the Shifted location and the Changed location point to the exact same directory. That article describes how any 'Changed' backup will actually overwrite any prior 'Shifted' backup for the exact same fileset, thus simplifying the above paragraph. Summarizing the above paragraphs: if one or more filesets need to be restored, the database administrator should first look in the Changed directory to see if it contains a copy of that fileset, and use it if it exists. If it does not exist in the Changed directory, then look in the Shifted directory. Finally, if the fileset is not backed up in the Changed or Shifted directory, then it must be in the Active directory, and that would be the one to use. If you have any questions or concerns about the above recommendations, please call your local AspenTech Support group and they will be able to help you confirm which files to use.
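The Changed > Shifted > Active precedence is easy to script when auditing backups. A minimal Python sketch follows; the directory layout and fileset file naming are assumptions to adapt to your own HBAK configuration:

from pathlib import Path

def find_fileset_backup(fileset_dir_name, changed, shifted, active):
    """Return the most authoritative backup copy of a fileset, or None.

    Precedence per this article: Changed supersedes Shifted, which supersedes Active.
    """
    for root in (Path(changed), Path(shifted), Path(active)):
        candidate = root / fileset_dir_name
        if (candidate / "arc.dat").exists():  # assumed layout: one folder per fileset
            return candidate
    return None  # not backed up anywhere - investigate before restoring

Keywords: History Backups Restore HBAK Tsk_Hbak Active Shifted Changed References: None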
Problem Statement: Aspen Cim-IO Store and Forward protects a system against loss of connection between the Aspen Cim-IO client and server by storing the data and forwarding it when the connection is re-established. However, if the Aspen Cim-IO server stops functioning, data loss will occur. Configuring redundant systems can prevent data loss by switching to a secondary Aspen Cim-IO server.
Solution: This article outlines how to configure Aspen Cim-IO Redundancy. Introduction: Aspen Cim-IO supports Cim-IO redundancy. You can configure two Cim-IO servers on redundant nodes; if communication with the first one is lost, the Aspen Cim-IO client switches to the secondary node. Aspen Cim-IO redundancy is fully compatible with Store and Forward functionality; Store and Forward may be used on both primary and secondary nodes. Aspen Cim-IO redundancy requires the following:
· InfoPlus.21 version 2004 (7.0.0) or higher
· Cim-IO kernel client side version 2004 (7.0.0) or higher
· A Cim-IO device with IO transfer records configured to use Store and Forward
· A running redundancy detection task
· Running Cim-IO MAIN and ASYNC client tasks with auto-restart disabled
· Appropriate changes to cimio_logical_devices.def
· Two identical redundant Cim-IO servers connected to the same end device.
The Aspen Cim-IO redundancy task actively monitors the following to determine failure:
· IO_LAST_UPDATE time of Cim-IO records
· ICMP (network) ping to the Cim-IO server node
· Success or failure of Cim-IO client calls
· Quality/status of a watchdog tag (optional)
Configure the Device: The recommended method for configuration is to use the Aspen InfoPlus.21 Administrator I/O wizard. Additional information outlining Aspen Cim-IO redundancy and configurations using the Aspen InfoPlus.21 Administrator I/O wizard is found in the Aspen Cim-IO Core User's Guide. Start by right-clicking on the Aspen InfoPlus.21 Administrator I/O wizard and selecting Add new device. The configuration is the same as for any single device, with a couple of exceptions pointed out below. Logical Devices tab: Select "Use as part of a Redundant Configuration" and add the secondary node name. Processes tab: Select alternate locations for store files on both primary and secondary nodes if desired. Once completed, the Aspen InfoPlus.21 Administrator I/O wizard adds the Cim-IO\Management\*.CSD file on both Aspen Cim-IO server nodes for startup. The Aspen InfoPlus.21 Administrator I/O wizard also updates the services files with the appropriate entries. Note: it is always a good idea to verify that the port numbers on all nodes match. The cimio_logical_devices.def file on the Aspen InfoPlus.21 server contains entries for the main device and both primary and secondary devices. Configure Cim-IO Variables: Run the CimIOProperties.exe utility on the Aspen InfoPlus.21 system to set the Cim-IO variables. It is also a good idea to run this utility on the Aspen Cim-IO servers. Below is a description of the variables that are specific to a Cim-IO redundancy configuration. Customers should understand each variable to determine whether the settings are consistent with system requirements. The following variables are Cim-IO client settings and should be set on the Aspen InfoPlus.21 server. CIMIOChangeoverStandbyCleanup: If the connection between the Aspen InfoPlus.21 server and the active redundant Cim-IO server node goes down, this variable regulates the behavior of the Changeover task in Aspen InfoPlus.21 to determine what to do with the store and list files on the offline Cim-IO server node after switchover.
· Use Changeover Cleanup causes the Changeover task to tell the node that becomes secondary to clean up its scan and store lists and to clear its store file. Only the store file from the primary node will be accepted and processed.
· Do not use Changeover Cleanup causes both store files to be forwarded and processed. Duplicate and older values will be rejected.
CIMIOChangeoverTimeout: For every transfer record, the Aspen Cim-IO client side issues a stop get, stop put, or cancel request, waiting this amount of time for a response before closing the TCP connection and establishing a connection to the secondary server. CIMIODualFailureDelay: If the network connection between the InfoPlus.21 server and both redundant Cim-IO server nodes goes down but the two redundant nodes remain active, gathering data and storing it, this variable regulates the behavior of the Changeover task in InfoPlus.21 in the event that communication with the secondary node comes up first, to decide whether it should become the active one by allowing reasonable time for the primary to restart. If the primary system does not come up during the timeout, then the secondary node becomes the active system. In this way Cim-IO ensures that the primary's store file will be the one recovered. CIMIORescanLogicalDevices: This variable indicates to Cim-IO client tasks whether redundancy is enabled or not. When enabled, the client tasks follow device reconfiguration made by the Changeover task as a result of a switch, hence the name of the variable. This variable must be set to (Redundant: Cim-IO Changeover used for a Redundant setup) for Cim-IO redundancy to function. CIMIOSendCleanupCancels:
· Send only a DISCONNECT when cleaning up
· Send CANCELs when cleaning up
The scenario where this variable applies is as follows: when one of the redundant nodes has failed, the Changeover task must send a cleanup request to this node to clean up all files and the connection. By default the task also sends requests to cancel all declared unsolicited tags. If the switch happened as a result of a network failure, there is no communication with the node, and therefore every attempt to cancel tags will time out. If the number of unsolicited tags and transfer records is significant, the cancel requests could considerably delay the overall changeover operation. For cases like this, this variable provides an option to skip cancelling unsolicited tags as part of the cleanup. CIMIOSFRejectOldData: This S&F-specific variable takes the following values:
· Accept old data from Store and Forward
· Reject old data from Store and Forward - all data will be deleted from the store file for all tags!!!
Two of the scenarios where this variable applies are: 1. After recovering from a failure, the single Cim-IO server or, in a redundant configuration, one or both Cim-IO servers forward S&F files with data stored during the failure. In this scenario, the secondary node may be re-sending data from a file that has already been sent from the primary node. 2. You decide to manually demand the recovery of a previously saved S&F data file. The variable CIMIOSFRejectOldData regulates the behavior of the Cim-IO client tasks when you are re-recovering data, either automatically or on demand, from a backup set of store files, some of which may have already been inserted into the InfoPlus.21 database. Reject old data from Store and Forward causes the Cim-IO client to skip any previously added store files: as soon as the client task detects, while processing forwarded data for a tag, a sample with a timestamp older than the timestamp of the most recent value inserted, the entire store file will be rejected. Accept old data from Store and Forward causes the Cim-IO client to process all the files being recovered.
Configure Cim-IO records: The device record and external tasks need to be created, configured, and added to the Aspen InfoPlus.21 Manager following the same guidelines as for non-redundant systems. Create a Cim-IO redundancy task: Create and configure a redundancy task record defined by IoExternalFTDef, if not already defined. Typically this record is named TSK_Detect, although other names are acceptable. Note: a redundancy task can manage up to 7 logical devices; if additional devices are required, an additional redundancy task will need to be created. Below is an explanation of the repeat area fields in TSK_Detect.
IO_DEVICE: Cim-IO logical device name of the Cim-IO client.
IO_DEVICE_PROCESSING: Activate Cim-IO redundancy changeover processing for this device by setting the field to "ON".
IO_TIMEOUT_VALUE: Timeout value used internally for detecting failure of the Cim-IO server. Recommended setting: +00:00:05.0.
IO_FREQUENCY: The frequency used for checking the Cim-IO server. The recommended setting is twice the value of IO_TIMEOUT_VALUE.
IO_TAGNAME: If this field contains a non-blank tag name, the quality status of the returned value will be checked to determine failure. A failure will be reported for any value other than CIMIO_STATUS_GOOD.
IO_FAILBACK: 'ON' allows automatic switching from secondary to primary if the secondary fails. 'OFF' prevents switching back from secondary to primary. Switching from primary to secondary is always enabled. Set IO_FAILBACK to ON unless you have some other method of implementing failback.
IO_STORE_ENABLE?: NO/YES. When set to 'YES', this prevents a switchover during Store and Forward recovery. Note: even though you will not lose incoming data if your forwarding node crashes, you will lose the ability to control the end device, because failover will still be pointing to the failed node until it recovers and forwards its data.
IO_PRIMARY_DEVICE: Unique Cim-IO logical device name for the permanent connection between the redundancy task and the primary Cim-IO server.
IO_SECONDARY_DEVICE: Unique Cim-IO logical device name for the permanent connection between the redundancy task and the secondary Cim-IO server.
IO_RESET: This field is intended to provide a way to switch between devices when IO_FAILBACK is OFF. 'Force Primary' forces a reset to the primary device. 'Force Secondary' forces a reset to the secondary device. 'Complete' indicates the reset is complete or inactive. Please note that a forced failover takes precedence over everything else. As a result, if you are receiving stored data from a node and you force a connection to another node, the forward operation will be interrupted, even if IO_STORE_ENABLE? = 'ON'.
IO_ACTIVE_DEVICE: Displays the current active device. This field only changes after a successful switchover. It can be monitored for COS activations for user-specific changeover logic.
IO_LAST_UPDATE: Time of the last switchover.
IO_PRIMARY_STATUS: Current status of the primary device.
IO_SECONDARY_STATUS: Current status of the secondary device.
Add the redundancy changeover task TSK_Detect to the Aspen InfoPlus.21 Manager defined tasks list so that it starts AFTER the Aspen Cim-IO client tasks. Keywords: failover References: None
Problem Statement: Where is the configuration of BCI Program Options stored?
Solution: BCI Program Options configuration information is stored in \HKEY_CURRENT_USER\Software\Aspentech\PimsBCI\Settings\, as observed below, for example for the file RREP Input: Keywords: None References: None
Problem Statement: Is it expected to have different solutions with Aspen Refinery Multi-Blend Optimizer SBO and MBO Planning Optimization?
Solution: SBO and the Planning MBO solve different problems, and it is normal to get different solutions; it is important to explain the differences between the two approaches. SBO with the min component cost objective function simply maximizes, as shown below: (BlendSalesPrice * VolBlend) - SumOf(CompCost * CompVolUsedInBlend). The planning objective function does not consider the component costs in the same way as SBO or the operational objective function. In the planning objective function there is a cost for PRODUCING the component that comes from the product (CompRunDown * CompProdCost). In other words, it does not matter how much of a component you use for a specific blend; what matters is the cost the refinery incurs when producing the component.
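Written out side by side (a reconstruction of the two objectives from the text above; the notation is mine, not from the product documentation):

SBO:      max  sum_b Price_b * V_b  -  sum_c Cost_c * ( sum_b v_{c,b} )
Planning: max  sum_b Price_b * V_b  -  sum_c ProdCost_c * R_c

where V_b is the volume of blend b, v_{c,b} is the volume of component c used in blend b, and R_c is the rundown (production) of component c. The component term in the planning form depends only on how much of the component is produced, not on how much any one blend consumes, which is why the two optimizers can legitimately return different solutions. Keywords: None References: None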
Problem Statement: What is the new Template Migration Tool and how does it work?
Solution: This new tool can be accessed by going to menu Add-ins | AspenRpt8 | Template Migration Tool. Note that you need to ensure you have selected Aspen Report Writer as an Excel add-in in order to see this option. In V11, Aspen Report Writer was enhanced to support 64-bit Windows; you can use this tool to migrate existing templates to new templates. To migrate existing templates: Start Excel and click on Add-ins | AspenRpt8. From the drop-down, select Template Migration Tool; the Report Writer Template Migration Wizard appears. Click on the appropriate tab to indicate whether you are migrating one template or more than one. Choose the location of the template(s) you will be migrating. Choose the location for the new migrated template(s); in this folder a migration log file ("MigrateLog") with information about the process will be generated. Click OK. Keywords: None References: None
Problem Statement: Can Aspen Unified PIMS co-exist with other versions of Petroleum Supply Chain Products?
Solution: Aspen Unified PIMS can co-exist only with the same version of PSC products. For example, Aspen Unified PIMS V11 should not be installed on a machine that has other PSC products of V10 or of any other version different from V11. Keywords: None References: None
Problem Statement: Is there a way to ensure I can correctly discriminate when an event starts/ends and when the prep/post times are represented in the Gantt chart?
Solution: As shown in the following steps, if you are adding prep and/or post times, you can adjust the Gantt bar thickness to ensure you can correctly discriminate when the event starts/ends and when the prep/post times are represented on the Gantt chart. In the demo model, add prep and post hours information to an event as shown in the example: Go to Gantt & Trend Chart Options and increase Gantt Bar Thickness as shown in the image: The result will be as follows: Keywords: None References: None
Problem Statement: How can I launch Aspen HYSYS in Run Time mode using Python?
Solution: The sample macro "Automation of Aspen HYSYS with Python and VBA" provides some code to launch Aspen HYSYS using Python, but it does not invoke the application in Run Time mode. The user can apply the following code for Run Time mode. HYSYS runtime can be called from Python by making use of "HYSYS.Application.NewInstance.Runtime.VX.X". An example of this usage would be:

import win32com.client

# Launch a new HYSYS instance in Run Time mode (V10.0 shown; adjust to your version)
hyApp = win32com.client.Dispatch('HYSYS.Application.NewInstance.Runtime.V10.0')
hyApp.Visible = True

# Open and activate a simulation case ('filepath' is the path to your .hsc file)
hyCase = hyApp.SimulationCases.Open('filepath')
hyCase.Activate()

Keywords: HYSYS, Automation, Python References: None
Problem Statement: How do I get the Twisted Tubes dll (Koch Heat Transfer dll) from Koch?
Solution: Shell and Tube Exchanger can link with the KHT TT dynamic link library (dll) to model such exchangers. Once the dll is installed on a computer equipped with Shell and Tube, then from Input | Exchanger Geometry | Tubes | Tubes tab, "KHT twisted tubes" can be selected as the "Tube type". Requests for the KHT dll should be made to Koch. The e-mail addresses for contacting KHT are given below: [email protected] and [email protected] Keywords: None References: Older articles: 1. KB Article ID 000033353, Twisted-tape insert in Aspen Shell & Tube Exchanger (that article is part of this reference article). 2. KB Article ID 000033041 (this article was removed from the website as the older Koch e-mail address no longer exists; the new e-mail addresses are given above).
Problem Statement: Some User Tools are not working for schedule group users in an APS/MBO model; is this normal?
Solution: This can be due to the settings in table COMMAND_LIST. In this table you can find the column USERGROUP; in this column you can add the name of the user group that can view or access the command from the tree. If access is required for more than one group, there must be a separate record for each group. Note that if this column is left blank, there are no restrictions on access. In the following image there are two groups of users, MODELER and ADMIN, that will have access to this tool. Keywords: None References: None
Problem Statement: How do I model two condensers (primary & secondary condenser) with a reflux drum using a RadFrac column? The outlets of both condensers would together be added to the column as reflux.
Solution: The user may want to build a model with two condensers when the primary condenser condenses most, but not 100%, of the vapor, so a secondary condenser has to be provided to recover the uncondensed vapors and avoid product losses. In the example file, the two-condenser system is modeled so that the condensed liquid of both condensers goes first to a reflux drum and the liquid is then distributed as reflux and distillate, as shown in the picture below. To follow this workflow: the 2nd stage will behave as the primary condenser and the 1st stage will behave as the secondary condenser; we remove the condensate from the 2nd stage and return it to the 1st stage. By doing this, we add the condensed liquid of the primary condenser to the secondary condenser. The condensed liquid is added externally using a pumparound, and the reflux ratio is defined as a minimum so that the system will consider the pumparound as reflux. Here we define a partial draw from the 1st stage to the 3rd stage (which will work as reflux, with the balance collected as distillate). In this case we do not define a reflux ratio (under RadFrac Column Specifications), or, if required, we define a very small value such as 0.001 for convergence, so that the pumparound return behaves as reflux. Note: This is just a workaround; the user needs to cross-check the total heat load and other details using a normal RadFrac model so that this design is verified, and the user needs to do some trials on flow adjustments to get the required heat duty. Generally, designers take the heat duty of the primary condenser as the total heat duty and then apply a rule of thumb of about 5 to 25%, based on experience, to design the secondary condenser. In the case of vacuum applications, some users also like to add air or non-condensables to check the losses when deciding the secondary condenser requirements. The attached model shows the total condenser and the two-condenser arrangement above having the same heat duty. Keywords: None References: None
Problem Statement: How do I import two or more FORTRAN subroutine files into Aspen Plus?
Solution: Please follow the steps below to include multiple Fortran subroutines in an Aspen Plus model: 1. Save all Fortran files and the Aspen Plus file in one folder. 2. Open Customize Aspen Plus VX.X and change to the folder containing the files (locate the path of the saved files). 3. Type 'aspcomp subroutinename' to compile each .f file. Each .f file will generate a .obj file (the .obj file is generated in the saved folder location). 4. Type 'asplink filename' to generate a 'filename.dll' file. All the .obj files within the folder will be included in this .dll file so that Aspen Plus will use all the Fortran subroutines. You may need to provide a new name for this .dll file; in the above example it is "list1". 5. Once the .dll file has been created, you can check it in the folder. Now create a '.opt' file with Notepad and type the dll file name in it (open Notepad, type the name of the dll file we created, and save that file in the same folder under some other name with the extension ".opt"; this file gives Aspen Plus the path to link that dll). 6. Now open Aspen Plus and type the opt file name in the linker option in Run Settings. This connects the two subroutine files to a single model. Keywords: None References: None
Problem Statement: How do I resolve “Input Warning 1231” in EDR (Exchanger Design & Rating)?
Solution: Warning 1231 is a warning to make the user aware that the model has a full support baffle (blanking baffle) present (for S-type rear heads), but the distance beyond this baffle has not been specified by the user. Hence, the distance beyond this baffle has been assumed by default by EDR, here 227 mm (as shown in the input warning). If you want to remove this warning, we suggest cross-checking whether any modification of this distance is required. The path to modify it is as follows: enter your own value under Tube Supports | "Length of tube beyond support/blanking baffle" and rerun. The warning will be gone. Keywords: None References: None
Problem Statement: How do I remove “Input Error 1124: Data input for Tube Path layout is unacceptable” in a Fired Heater simulation?
Solution: The reason for Input Error 1124 is that the program does not support modeling a separated layout and horizontal tubes using a long furnace model. This information is available from the help. We suggest using the “well-stirred model” as the Firebox calculation model to eliminate Error 1124. Keywords: None References: None
Problem Statement: How do I enter standard cubic meter per day as gas flow in a stream?
Solution: Sm3/day and Nm3/day do not represent the same measurement: Sm3/day is standard flow at 1 atm and 15°C, whereas Nm3/day is normal flow at 1 atm and 0°C. To enter the flow rate in Sm3/day, follow the steps below. 1. From the Home ribbon group, select Unit Sets. 2. In Units of Measure, change the unit for Molar Flow to m3/d_(gas) and then click OK.
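As a quick sanity check on the difference between the two bases, here is a small Python sketch (assuming ideal-gas behavior, which is how standard and normal volumes are defined):

# Ideal-gas molar volume V = R*T/P at 1 atm
R = 8.314462618          # J/(mol*K)
P = 101325.0             # Pa (1 atm)

v_standard = R * (15.0 + 273.15) / P   # ~0.02365 m3/mol at 15 C
v_normal   = R * (0.0 + 273.15) / P    # ~0.02241 m3/mol at  0 C

# The same molar gas flow expressed in the two bases differs by the temperature ratio:
flow_sm3_per_day = 1_000_000.0
flow_nm3_per_day = flow_sm3_per_day * v_normal / v_standard   # ~947,900 Nm3/day

Keywords: Molar Flow, Standard Gas Flow, Normal Gas Flow References: None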
Problem Statement: In Aspen Refinery Multi-Blend Optimizer, the Blend Details option is greyed out. Is this normal, and how can I resolve this behavior?
Solution: This behavior is expected if the MBO model is not working properly and has simulation errors. If there are errors that stop the model from optimizing, the Blend Details option will be greyed out, as in the following image. To resolve this issue, the model has to be free of simulation errors, which can be found by double-clicking the Sim Errors option at the bottom right side of the MBO GUI. Keywords: None References: None
Problem Statement: Cim-IO for OPC can fetch the descriptions and engineering units for certain OPC servers. See CIM-IO Servers / Smart Data Types for the list of servers.
Solution: Previous versions of Cim-IO for OPC used the configuration file %CIMIOROOT%\io\opc\extensions.txt for the configuration of smart data types. This article describes the configuration of smart data types for current versions (v8.x, v9.x, and higher) of Cim-IO for OPC. For v8.x and v9.x there are two ways of configuring Cim-IO for OPC: Cim-IO for OPC Properties or the Cim-IO Interface Manager. For v10.x and higher, the Cim-IO Interface Manager is the only method to configure Cim-IO for OPC. 1. Configuring smart data types using Cim-IO for OPC Properties (for v8.x and v9.x):
- Open Cim-IO for OPC Properties and click on Configure Smart Data Types.
- Click on Add and add the OPC server, and then the parameters for descriptions and engineering units. The example below is for a Honeywell TPN OPC server.
- Click OK and close Cim-IO for OPC Properties.
- Restart the Cim-IO for OPC service to apply the change.
2. Configuring smart data types using the Cim-IO Interface Manager (for all versions v8.x and higher):
- Open the Cim-IO Interface Manager. Select the OPC interface and click on Configure Smart Data Types.
- Add the OPC server, and then the parameters for descriptions and engineering units. The example below is for a Honeywell TPN OPC server.
- Click on Add/Update and then Close.
- Restart the Cim-IO Manager service to apply the change.
Keywords: Cim-IO for OPC Smart Data types References: None
Problem Statement: How can I change the structure of a custom definition record without having to delete the records defined against it? Note: Do not use this article to change the structure of standard Aspen InfoPlus.21 definition records. Changes made to standard Aspen InfoPlus.21 definition records will not be supported by Aspen Technology, and your changes will be overwritten the next time you upgrade your snapshot.
Solution: Use the Aspen InfoPlus.21 Administrator to duplicate the custom definition record and make the duplicated record unusable. Next, start the Aspen Definition Editor to modify the duplicated custom definition record. After saving the changes and making the new custom definition record usable, open the Aspen InfoPlus.21 Administrator to create test records defined against the custom definition record to verify the changes you made. When you are satisfied your changes work properly, stop Aspen InfoPlus.21. Be sure to leave your modified definition record usable. Use Windows Explorer to navigate to the Aspen InfoPlus.21 Group200 folder and rename the Aspen InfoPlus.21 snapshot saved when stopping Aspen InfoPlus.21 to infoplus21_backup.snp. Also, navigate to ...\ProgramData\AspenTech\InfoPlus.21\c21\h21\dat and copy (not rename) map.dat to map_backup.dat. Next, navigate to the Aspen InfoPlus.21 code folder and start redefinewizard.exe. Acknowledge the warning message displayed by the Redefine Wizard and browse to infoplus21_backup.snp when the wizard prompts for a snapshot file name. After choosing your backup snapshot, press the Next button, select the names of your old and new definition records, and press Next. Finally, browse to your Group200 folder and enter InfoPlus21.snp (or the name of the snapshot you use to start Aspen InfoPlus.21). At this point, the Aspen Redefine Wizard upgrades your snapshot, moving the records defined against your old definition record to the new one and modifying any references to your old definition record to point to the new one (including Aspen Process Explorer mapping records defined by AtMapDef). The Redefine Wizard also changes the file map.dat to reflect any changes made to history repeat areas. Now restart Aspen InfoPlus.21 and find your new definition record using the Aspen InfoPlus.21 Administrator. Verify your changes, that the Redefine Wizard moved the data records to the new definition record, and that you can view history. If you are satisfied with your changes, make the old definition record unusable and delete it. Then rename the new definition record to the old name. If there are problems, you can restore the database to its former state by stopping Aspen InfoPlus.21, renaming infoplus21_backup.snp to the snapshot name you use to start Aspen InfoPlus.21, and restoring map_backup.dat to map.dat. Note: This procedure moves all the records defined against the old definition record to the new one. Note: The Redefine Wizard allows you to add fields to a definition record, change the length of character or integer fields, and add precision to real fields; however, you cannot change data field types from, for example, real to integer or integer to real. Keywords: Redefine wizard Definition editor References: None
Problem Statement: The header file setcim.h contains data types like DTYPREID, DTYPSHRT, DTYPLONG, and DTYPREAL to be used as the datatype argument in Aspen InfoPlus.21 API routines. Missing is a data type for character or string data. What data type do you use when accessing character strings using the Aspen InfoPlus.21 API?
Solution: Use the size of the character field for the datatype argument when accessing character strings using the Aspen InfoPlus.21 API. Keywords: String Character Datatype API References: None
Problem Statement: How can I convert a timestamp in Aspen InfoPlus.21 format to UNIX Epoch Time (i.e. the number of seconds since 01-JAN-1970)?
Solution: Attached to this solution is a file named IP21TimeToUnixEpochTime.txt. Copy the contents of this file into the Aspen SQLplus query writer and save the query as a record defined by ProcedureDef named IP21TimeToUnixEpochTime. IP21TimeToUnixEpochTime has one calling parameter: tsIP21 - an Aspen InfoPlus.21 timestamp. The function returns the equivalent UNIX Epoch Time relative to your time zone.
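The attached query is SQLplus; as a cross-check of its result, the same conversion can be expressed in Python (a sketch that assumes the timestamp is in the server's local time zone and in the DD-MON-YY format shown below):

from datetime import datetime

def ip21_to_unix_epoch(ts: str) -> int:
    """Convert e.g. '01-JAN-70 01:00:00' (local time) to UNIX epoch seconds."""
    local = datetime.strptime(ts, "%d-%b-%y %H:%M:%S")  # naive datetime = local time
    return int(local.timestamp())                       # seconds since 01-JAN-1970 UTC

Keywords: unix epoch References: None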
Problem Statement: When using aspenONE Process Explorer (A1PE) Alerting (Alert Subscriptions), what are the alert e-mail customisation options? For example, can the alert e-mail subject line that is received be customised, and can multiple e-mail addresses be specified?
Solution: Currently, the alert e-mail subject field is system generated and cannot be modified in any way. The e-mail recipient design is on a per-user basis. Its configuration relies on a corresponding IP.21 record defined by AlertUserDef, where each record is particular to an individual user. AlertUserDef user records are automatically created through A1PE Alerts configuration. NOTE: although the input field for the e-mail address is capable of holding many characters, and the corresponding AlertUserDef record field IP_EMAIL_ADDRESS is a max 50-character field, the intention here is for only one e-mail address to be entered. Multiple e-mail addresses should not be entered as they will not work. Example A1PE Alert Subscription e-mail configuration screen: Corresponding AlertUserDef user record generated in IP.21: For a detailed description of all the fields in an AlertUserDef record please refer to: https://esupport.aspentech.com/S_Article?id=000046547 Keywords: SMTP, Alert Subscription, Email subject heading, Email To:, Specifying multiple email recipients References: None
Problem Statement: How can I prevent users from writing to the Aspen InfoPlus.21 database via Aspen Excel Add-in and/or aspenONE Process Explorer?
Solution: Use the AFW Security Manager to import the following application XML file: AspenMESClientApplications_AFW.xml, located in the C:\Program Files (x86)\AspenTech\ProcessData directory... From there you must select the Excel Add-Ins subgroup and edit the following securable object (Standard Write) to allow or deny the Write permission. The same procedure applies to the Web Clients application, which has three securable objects: Standard Write, OEE Record Management, and OEE Event Entry. Keywords: excel addin process data afw security manager aspenONE Process Explorer A1PE References: None
Problem Statement: When performing a test connection from the online server, the tags show red and the CIMIO_USR_GET_RECEIVE error message is output.
Solution: An RTE controller is an application that continuously reads from and writes to the OPC server via Cim-IO. The test connection would be a hurdle to establishing a steady connection to get data from the OPC server. In this case, the error states that the RTE controller is not receiving a reply from Cim-IO in a timely manner. The solution is to configure a bigger timeout value to allow a bigger time window to receive data. This is done through Configure Online Server: 1. Open Configure Online Server. 2. Go to the IO section. 3. Select the IO source in use by the controller. 4. Change the timeout to 35. The default Cim-IO timeout is 15 seconds: a read or write will wait 15 seconds before disconnecting and marking all the data BAD. This is different behavior vs. ACO: ACO does not mark the data BAD; instead, it skips and waits until the next cycle. The timeout allows RTE to extend that wait. Sometimes it is necessary to recreate the complete IO source for the timeout change to take effect. To see the actual value that the IO source is using, do the following: 1. Open Task Manager. 2. Go to the Details view and enable the Command line column (right-click on a column name and select Select Columns). 3. Find the AspenTech.ACP.IO.CimioClienDriver.exe process. 4. In the Command line column you will be able to see the name of the IO source and the configured timeout; see the image below. Extending the timeout does not necessarily speed up the read; most of the time it extends the time it takes to go through the controller cycle. If you have a controller that runs 1 cycle/min, having a timeout close to 60 seconds is not ideal. To increase the read speed, you can increase the Frequency value in the same way as the timeout in Configure Online Server; that will enable cached reads. Keywords: Test Connection, DMC3 Builder, Configure Online Server References: None
Problem Statement: How does TSK_DETECT determine when to fail over to a redundant node?
Solution: The Aspen Cim-IO utility CimIOProperties allows you to set several registry values to modify Cim-IO behavior. In particular, Aspen Cim-IO client tasks (e.g. TSK_M_devname, TSK_A_devname, and TSK_U_devname) use the parameters CIMIOPingFrequency and CIMIOMaxPingFailures to determine when to set the field IO_LAST_STATUS to "Server or S&F Shutdown" in Aspen InfoPlus.21 IO Transfer records. The Aspen Cim-IO client tasks attempt to ping their Cim-IO nodes at the frequency specified in CIMIOPingFrequency. The setting CIMIOMaxPingFailures determines the maximum number of consecutive ping failures before the Aspen Cim-IO client tasks set IO_LAST_STATUS to "Server or S&F Shutdown" in Aspen InfoPlus.21 IO Transfer records. The Aspen InfoPlus.21 external task TSK_DETECT is responsible for failing an unresponsive Cim-IO node over to a redundant node. TSK_DETECT does not use the settings CIMIOPingFrequency and CIMIOMaxPingFailures; in other words, CIMIOPingFrequency and CIMIOMaxPingFailures have no effect in determining when TSK_DETECT fails from one node to another. Each redundant device has an occurrence in the repeat area IO_#TAGS in the record TSK_DETECT. Each occurrence has two fields (IO_FREQUENCY and IO_TIMEOUT_VALUE) that determine how often TSK_DETECT pings the active node and how long TSK_DETECT waits for a reply from the ping command. TSK_DETECT first performs an ICMP (network) ping to both the primary and backup nodes to determine if the nodes are active. If they are, then TSK_DETECT pings the Cim-IO server tasks (i.e. the dlgp, store, forward, and scanning processes). If either ping fails, TSK_DETECT forces a failover from the active node to the secondary node. The field IO_TIMEOUT_VALUE establishes the maximum amount of time TSK_DETECT waits for the ping command to complete. A message saying a host could not be found, a request timed out, or the destination is unreachable is a valid ping reply, and the ping command completes. As soon as TSK_DETECT receives a reply from the ping command indicating the Cim-IO node is not available or the Cim-IO server processes are not active, TSK_DETECT will force a failover. If, because of network traffic or some other condition, TSK_DETECT receives no reply from the ping command in the amount of time indicated in the IO_TIMEOUT_VALUE field, then TSK_DETECT will also force a failover. Since ping commands return quickly, IO_TIMEOUT_VALUE should never come into play. IO_TIMEOUT_VALUE does not establish a grace period for a remote node to come online. For example, if IO_TIMEOUT_VALUE = 30 seconds and the ping command returns "destination not reachable" in one second, then TSK_DETECT does not wait 29 more seconds for the Cim-IO node to become available but rather begins the failover process immediately after receiving the "destination not reachable" message.
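To illustrate the timeout semantics described above (any reply completes the check immediately; IO_TIMEOUT_VALUE only bounds the wait for a reply), here is a rough Python sketch of the decision logic; this is an illustration, not AspenTech code:

import subprocess

def node_alive(host: str, timeout_s: float) -> bool:
    """Any ping reply completes the check at once; a negative reply ('unreachable',
    'timed out') or no reply within timeout_s both mean: fail over now."""
    try:
        result = subprocess.run(
            ["ping", "-n", "1", host],   # Windows ping, one echo request
            capture_output=True,
            timeout=timeout_s,           # analogous to IO_TIMEOUT_VALUE
        )
        return result.returncode == 0    # non-zero return = negative reply
    except subprocess.TimeoutExpired:
        return False                     # no reply at all within the window

Keywords: CIMIOPingFrequency CIMIOMAXPingFailures IO_LAST_STATUS TSK_DETECT IO_FREQUENCY IO_TIMEOUT_VALUE Server or S&F Shutdown References: None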
Problem Statement: Access to entries can be restricted by the following three entry types on the Security display of the Configuration tab in the Aspen Production Control Web Server (PCWS): Standard Operator Engineer This KB solution describes how these access levels are assigned.
Solution: The new Security page in PCWS has check boxes for entry classifications: Standard Entries, Operator Entries and Engineer Entries. These entry classifications can be determined by looking in the [product].product.config files as follows (e.g. dmcplus.product.config or aspeniq.product.config): - Standard Entries are entries with changePermission="None". - Operator Entries are entries with changePermission="operatorChange". - Engineer Entries are entries with changePermission="engineerChange". Standard Entries do not have any "W" (Write) permission (because they correspond to entries with changePermission="None"). Therefore, the W checkbox does not have an effect on Standard Entries. Note that the legacy [product]entrydata.xml files are only used by the CORBA-based "Aspen ACO View Data Provider". They are no longer used by the WCF-based "Aspen APC Web Provider Data Service". The WCF-based service uses these product.config files instead. There is no list of entries for each category, as this list can change by either overriding the changePermission (in a user.config file) or if the changePermission in the product.config files changes when a new product release or patch is delivered. Example The user is running a controller in RTE and wants to give the operator write access to the Initialize Prediction switch for the CVs and to the application mode. The solution is to edit the APC.user.display.config file under C:\ProgramData\AspenTech\APC\Web Server\Products\APC to give the operator read/write access to SS_ActivateTesting and InitializePredictionsExt. The configuration and the result in PCWS when the Operator is logged in are shown below. Keywords: WCF, CORBA, dmcplus.product.config, Configuration, Security, Standard Entries, Operator Entries, Engineer Entries References: None
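Since the screenshot of the override is not reproduced in this article, here is a sketch of the idea only. The changePermission values (None, operatorChange, engineerChange) come from the article above, but the exact element names and nesting in APC.user.display.config are assumptions; copy the actual entry elements from the shipped product.config file and change only the changePermission attribute:
<!-- Hypothetical user.config override: grant the operator write access.
     Element names and nesting are assumptions; mirror your product.config entries. -->
<entry name="SS_ActivateTesting" changePermission="operatorChange" />
<entry name="InitializePredictionsExt" changePermission="operatorChange" />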
Problem Statement: The AspenProcessDataAppPoolx64 stops due to repeated application failures. The Windows Application Event Log shows APPCRASH errors for AtProcessDataRest.dll.
Solution: This problem may be caused if a mapping record defined by ATMapDef has a blank entry for the field MAP_CategoryFormat. This field formats the field MAP_Category in the repeat area MAP_#Categories. To test, execute the following query on each Aspen InfoPlus.21 server serviced by the A1PE server: select name, map_categoryformat from atmapdef where map_categoryformat is NULL; Enter the name of a record defined by AtMapSelectDef into the blank MAP_CategoryFormat field of each record returned by the query. If unsure, try IP21_Gen_MapC. Keywords: Mapping record AtProcessDataRest.dll crash AspenProcessDataAppPoolx64 References: None
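If many mapping records are affected, the blanks can be filled in one pass with Aspen SQLplus. This is a sketch under the assumption that IP21_Gen_MapC (a record defined by AtMapSelectDef) is an appropriate format record for all of the affected mapping records; verify that before running it.
-- Fill every blank MAP_CategoryFormat with the AtMapSelectDef record IP21_Gen_MapC
UPDATE atmapdef SET map_categoryformat = 'IP21_Gen_MapC' WHERE map_categoryformat IS NULL;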
Problem Statement: The Aspen SQLplus function XOLDESTOK displays the "oldest allowed history timestamp" for a tag. History values can be inserted back to this time but no further. How does Aspen InfoPlus.21 calculate this value?
Solution: There are two scenarios to consider: Creating a record using a never-before-used record ID Creating a record using a record ID previously used by a tag that collected history values Using a never-before-used record ID If a record is created using a never-before-used ID, the "oldest allowed history timestamp" value is calculated by subtracting the history repository's Past Time parameter from the time the record was created. For example, if a new tag is created on 01-JAN-20 01:00:00 and the repository's Past Time parameter is set to 365 days, then the "oldest allowed history timestamp" value for the tag would be 01-JAN-19 01:00:00. Re-using an old record ID If a record is created re-using a record ID formerly used by a tag that collected history, the "oldest allowed history timestamp" value is set either to the timestamp of the most recent history occurrence from the old record plus one microsecond, or to the time the record was created minus the history repository's Past Time parameter, whichever is more recent. For example, if a tag created on 01-JAN-20 01:00:00 re-uses the record ID of a tag with history values up to 01-DEC-19 01:00:00 (again assuming a 365-day Past Time), then the "oldest allowed time" for the new tag would be 01-DEC-19 01:00:00.000001. If instead the most recent history recording of the old tag was 01-DEC-18 00:00:00, then the "oldest allowed history timestamp" would be set to 01-JAN-19 01:00:00 (01-JAN-20 01:00:00 minus 365 days). Thus, history values that were stored for the old record will not be visible from the new record. Keywords: record id recid XOLDESTOK References: None
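As a quick check, XOLDESTOK can be called from an Aspen SQLplus query. This is a sketch only; it assumes XOLDESTOK accepts a record (tag) name as its argument, so confirm the exact signature in the SQLplus function reference for your version before relying on it.
-- Display the oldest allowed history timestamp for a few tags (argument form assumed)
SELECT name, XOLDESTOK(name) AS "Oldest Allowed" FROM ip_analogdef WHERE name LIKE 'ATC%';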
Problem Statement: The lab sample value is not used to update the IQ bias, and the Combined Status is always No Update Method. This problem can be caused by setting LBUPDMETH (Lab Bias Update Method) = 1, which means the lab value is only validated and not used to update the bias.
Solution: Update the IQF file and reload it using the following steps: 1. Open the *.iqf file in IQconfig. 2. Right-click the LBU module of your IQ and select the Properties context menu. 3. In the Lab Update dialog, select the Traditional or Scores method. 4. Save the iqf file and reload it from the PCWS > Manage dialog or the APCManage utility. 5. Check that the Combined Status of Lab Data Previous Samples for new lab samples is Waiting or Good; the Lab Bias is then calculated from the Raw Prediction and the Lab Sample. Key Words IQ, Lab Update, No Update Method, LBUPDMETH Keywords: None References: None
Problem Statement: If you activate best fit storage for a point defined by IP_AnalogDef or IP_DiscreteDef, Aspen InfoPlus.21 will store the first and last values in each 30-minute period for the tag along with the tag's maximum and minimum values in the interval. The best fit data can then be used by aspenONE Process Explorer to plot data with large time spans. The problem is that Aspen InfoPlus.21 has to collect best fit data for more than a year before it becomes useful to aspenONE Process Explorer. If you have trend data going back several years, this article explains how you can back populate the best fit repository using your existing trends.
Solution: Aspen InfoPlus.21 has a utility named h21asctoarc that creates file sets based on the contents of a text file. Attached to this article is a query named bfdata.txt that creates the text file required by h21asctoarc. Copy bfdata.txt to the Aspen InfoPlus.21 server and rename the file to bfdata.sql. The query assumes that you have no best fit data for the back-fill time period, and that you have activated best fit data collection for your IP_AnalogDef and IP_DiscreteDef tags by setting the field IP_BF_REPOSITORY to TSK_DHIS_AGGR and setting IP_BF_ARCHIVING to ON. Start the Aspen SQLplus query writer and open the query bfdata.sql. Before running the query, set the default timeout of the query writer to 0 to disable timeouts. The query first asks for the starting time for data collection. Since we are collecting best fit data in 30-minute intervals to mimic InfoPlus.21 best fit processing, the starting time starts on an even hourly boundary. Bfdata then asks for the ending time for data collection. Care must be taken not to enter a timestamp that overlaps existing TSK_DHIS_AGGR file sets. After confirming your choice, bfdata asks for the timezone GMT offset to identify the time zone of the InfoPlus.21 server. For example, if you were using this query at 3:00 PM on September 28, 2017 CDT, then you would enter -5. Finally, the query prompts for the name of the output file, which will be stored in the Group200 folder. The default file name is bfdata_startingtime_to_endingtime.txt. Bfdata then loops through all IP_AnalogDef and IP_DiscreteDef records having the field IP_BF_REPOSITORY set to TSK_DHIS_AGGR. Therefore, it is important to properly configure tags for best fit data collection before running the query. When the query finishes, you will see confirmation messages in the output area of the query writer. The query places the output file in the InfoPlus.21 Group200 folder. Next, open the InfoPlus.21 Administrator to find an empty file set in the repository TSK_DHIS_AGGR. This example uses file set 3. Turn off the repository TSK_DHIS_AGGR. Any best fit data collected while the repository is off will be buffered to the file event.dat in the repository’s root folder. Open a command window as an administrator on the InfoPlus.21 server, navigate to the InfoPlus.21 Group200 folder, and find the output file. H21asctoarc is in the folder C:\Program Files\AspenTech\InfoPlus.21\c21\h21\bin (note: the actual drive may differ). The command arguments expected by h21asctoarc are: -rTSK_DHIS_AGGR (to name the repository) -a3 (to specify the file set number) -fbfdata_20170701000000_to_20170927102341.txt (to identify the file). Note: The repository name must be in upper case. Run h21asctoarc as follows: C:\”Program Files”\AspenTech\InfoPlus.21\c21\h21\bin\h21asctoarc.exe -rTSK_DHIS_AGGR -a3 -fbfdata_20170701000000_to_20170927102341.txt (note the double quotes around Program Files). H21asctoarc prints informational messages for each tag in the file before finally finishing. Restart InfoPlus.21 and verify the existence of the new file set. Keywords: Best Fit Back Fill Back Populate bfdata bfdata.sql References: None
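Before running bfdata.sql, it may help to confirm which tags are actually configured for best fit collection. A minimal sketch using the fields named in the article above (the string comparisons assume the fields render as the repository name and ON, as they appear in the Administrator):
-- Tags configured for best fit collection in TSK_DHIS_AGGR
SELECT name FROM ip_analogdef WHERE ip_bf_repository = 'TSK_DHIS_AGGR' AND ip_bf_archiving = 'ON'
UNION
SELECT name FROM ip_discretedef WHERE ip_bf_repository = 'TSK_DHIS_AGGR' AND ip_bf_archiving = 'ON';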
Problem Statement: How to build a nonlinear model in DMC3 Builder for IQ and Apollo?
Solution: Nonlinear MISO models are only supported by the APC Project type. You can build and export nonlinear models with the following steps: Create an APC Project in DMC3 Builder. Import a dataset into DMC3 Builder and create a MISO type model. Select Nonlinear in the Model Type Selection dialog. Make sure to check Identify separate MISO output models, then specify the dataset, input and output in the Identify Model dialog. Click Build Models in the Model Settings group; this opens the Edit MISO Models dialog. Select the model type you want and click the Configure button to invoke the Identify dialog; check the help topics for identification with these model types. After identification completes, click the Export button in the Edit MISO Models dialog. Select the file type to save it: ANL and ADY for a Nonlinear controller (Apollo), IQR for an IQ application. Keyword: DMC3 Builder, IQ, Nonlinear, Model Keywords: None References: None
Problem Statement: How to import XML Spiral files into Aspen HYSYS Petroleum Assay Manager?
Solution: In V11, it is now possible to import Spiral XML files for use in Aspen Assay Manager (AAM). Please follow the steps below to import Spiral XML files. 1. Open a new case in Aspen HYSYS V11. 2. Create a component list. There are 3 options: a. Create the list from the HYSYS database, adding item by item. b. Import the list from an existing file (.cml). c. Adapt your component list using one of the HYSYS library component lists as a starting point. 3. Select the fluid package for your simulation. 4. Select Petroleum Assay and, from the Ribbon, select the option to import (Import Spiral XML). 5. Select the desired XML file. 6. Aspen Assay Manager will import the file and maintain the characterization from Spiral, unless the user decides to later recharacterize it in Aspen HYSYS using the property package previously selected. A report will appear with details about the import. Click OK. 7. In the Petroleum Assay folder, the input and conventional results will be available in the Properties Environment for review. 8. The Assay will then be available for use in the Simulation Environment. Keywords: Spiral; XML; Aspen Assay Manager; Characterization; AAM; References: None
Problem Statement: How can I move all Aspen Calc calculations from a folder into a schedule group?
Solution: Attached to this knowledge base article is a query named MoveCalculationsToScheduleGroups.txt. Copy the contents of the query into the Aspen SQLplus query writer and execute the query. The query prompts for an Aspen Calc folder name and the name of a schedule group. After receiving confirmation that your answers are correct, the query places all the calculations contained in the folder into the schedule group. Key Words folder schedule group Keywords: None References: None
Problem Statement: This article describes an effective strategy for saving Aspen InfoPlus.21 database snapshots.
Solution: Since Aspen InfoPlus.21 uses a memory-resident database, it is very important to save snapshots of the database at regular intervals and when Aspen InfoPlus.21 shuts down. Aspen InfoPlus.21 uses a snapshot to restore the memory-resident database when starting. You should also manually save snapshots before performing database maintenance. Once you have saved different snapshots, you can specify them as alternate snapshots for TSK_DBCLOCK to use in case the default snapshot InfoPlus21.snp is corrupt. Saving snapshots when Aspen InfoPlus.21 shuts down. Be sure the field FILE_NAME in the record TSK_SAVE is set to InfoPlus21.snp. If you do not specify a path, TSK_SAVE uses the Aspen InfoPlus.21 Group200 folder as the default. The process TSK_SAVE saves the memory-resident database to InfoPlus21.snp in the Group200 folder when stopping Aspen InfoPlus.21. When Aspen InfoPlus.21 starts, by default, TSK_DBCLOCK uses the snapshot InfoPlus21.snp located in the Group200 folder to restore the memory-resident database. You can see this in the field LAST_SNAPSHOT_LOADED in the record TSK_SAVE. Saving snapshots at regular intervals to the Aspen InfoPlus.21 Group200 folder The Aspen InfoPlus.21 database contains a record named SAVE_SNAP. By default, SAVE_SNAP schedules hourly database saves to the snapshot InfoPlus21.snp located in the Group200 folder. See our best practices article 137475 for more details. You should also create records defined by DataBaseSaveDef to save snapshots at times other than SAVE_SNAP. For example, you could create a record named HourlySave to save a snapshot named InfoPlus21_Hourly.snp at the bottom of each hour. Likewise, you could create a record defined by DataBaseSaveDef named DailySave to save snapshots once a day. Regularly save snapshots to a disk drive not used by Aspen InfoPlus.21. This protects your database in case the Aspen InfoPlus.21 drive fails. In this situation, specify a path before the snapshot name. In the following example, the record DailySave1 saves a snapshot to C:\IP21_Backups\Snapshots\DailySave.snp. Manually save snapshots prior to making database changes. To manually save a snapshot, open the Aspen InfoPlus.21 Administrator, right-click on the Aspen InfoPlus.21 database name, and select Save Snapshot. Use the Aspen InfoPlus.21 Manager to specify alternate snapshots to use in case InfoPlus21.snp is corrupt. After saving multiple snapshot copies, you can specify a list of alternate snapshots for TSK_DBCLOCK to use in case InfoPlus21.snp is corrupt. Open the Aspen InfoPlus.21 Manager, double-click on TSK_DBCLOCK in the Defined Tasks pane, and then click on the Snapshots button that appears next to the Restart Settings button. Pressing the Snapshots button opens the Configure Snapshots Lists dialog window that allows you to browse to alternate snapshots. Allow TSK_SAVE to maintain the alternate snapshots list TSK_SAVE by default updates the alternate snapshots list each time it processes a record scheduled by DataBaseSaveDef. The snapshots are listed in reverse chronological order (i.e. from most recent to oldest). If you stop Aspen InfoPlus.21 normally, the snapshot created when Aspen InfoPlus.21 stops will be the most recent snapshot and will be used when Aspen InfoPlus.21 starts. If the server stops without first stopping Aspen InfoPlus.21, then TSK_DBCLOCK will use the most recent snapshot saved by TSK_SAVE. Uncheck the box "Loading Order By time" to disable this feature. Keywords: None References: None
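The scheduled save records above are normally created in the Aspen InfoPlus.21 Administrator, but a record defined by DataBaseSaveDef can also be created from Aspen SQLplus. A minimal sketch, assuming DataBaseSaveDef exposes the usual scheduling fields (SCHEDULE_TIME and RESCHEDULE_INTERVAL) alongside FILE_NAME; verify the field names against the existing SAVE_SNAP record, and complete the new record in the Administrator if necessary.
-- Hypothetical daily snapshot record saving to a drive not used by Aspen InfoPlus.21
INSERT INTO DataBaseSaveDef (name, file_name, schedule_time, reschedule_interval)
VALUES ('DailySave1', 'C:\IP21_Backups\Snapshots\DailySave.snp', '02:00:00', '24:00:00');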
Problem Statement: Aspen InfoPlus.21 may not start when using Symantec Endpoint Protection, and the Aspen InfoPlus.21 Task Manager Service executable (tsk_server) may consume excessive CPU time degrading Aspen InfoPlus.21 performance.
Solution: Symantec Endpoint Protection may block Aspen InfoPlus.21 executable images from running, with no error logs indicating that the programs were blocked from running, or may cause Aspen InfoPlus.21 processes to perform poorly. AspenTech does not know why Symantec Endpoint Protection may block Aspen InfoPlus.21 processes from running. To work around this problem, place any executable processes related to Aspen InfoPlus.21 into the exclusion list for Symantec Endpoint Protection. You must place the names of the actual executables into the exclusion list; it is not enough to place the Aspen InfoPlus.21 code folder into the exclusion list. Following is a list of executables normally started by Aspen InfoPlus.21; however, this list is not exhaustive. Your Aspen InfoPlus.21 server may start other programs not in this list. Look through the defined task list of the InfoPlus.21 Manager for the executables used by your Aspen InfoPlus.21 server. Depending on your operating system, except for h21prime and h21archive, these executables are located either in C:\Program Files\AspenTech\InfoPlus.21\db21\code or C:\Program Files (x86)\AspenTech\InfoPlus.21\db21\code. H21prime and h21archive can be found in C:\Program Files\AspenTech\InfoPlus.21\c21\h21\bin or C:\Program Files (x86)\AspenTech\InfoPlus.21\c21\h21\bin. Note: If you did not install Aspen InfoPlus.21 using the default paths C:\Program Files\AspenTech\ and C:\Program Files (x86)\AspenTech\, then the paths to the Aspen InfoPlus.21 executables will be different from what is listed below. Executable image to exclude (used to start):
tsk_server.exe: InfoPlus.21 Task Service
dbclock.exe: TSK_DBCLOCK
h21prime.exe (located in the h21\bin folder noted above): Initializes repositories
h21archive.exe (located in the h21\bin folder noted above): Archiving program for each repository
plantap.exe: TSK_PLAN
savedb.exe: TSK_SAVE
h21task.exe: TSK_H21T
h21arcbackup.exe: TSK_HBAK
kpi_task.exe: TSK_KPI
hlth.exe: TSK_HLTH
ip21servicehost.exe: TSK_ACCESS_SVC
ip21OPCuaserverhost.exe: TSK_OPCUA_SVR
cimq.exe: TSK_CIMQ
infoplus21_api_server.exe: TSK_ORIG_SERVER, TSK_ADMIN_SERVER, TSK_APEX_SERVER, TSK_EXCEL_SERVER, TSK_DEFAULT_SERVER, TSK_BATCH21_SERVER
sqlplus_server.exe: TSK_SQL_SERVER
iqtask.exe: Query tasks (TSK_IQ1, TSK_IQ2, etc.)
sqlplusreportscheduler.exe: TSK_SQLR
tsk_clc1.exe: TSK_CLC1
bgcsnet.exe: TSK_BGCSNET
iq.exe: TSK_CHK_SCRATCH, TSK_ACTG_SYNC
actg.exe: TSK_ACTG
actg_snf.exe: TSK_SNFA, TSK_SNF2, TSK_SNF3, TSK_SNF4, TSK_SNF5
GoldenBatchProfiling.exe: TSK_GBP
cmon.exe: TSK_CMON
cmrpt.exe: TSK_CMRP
ceve.exe: TSK_CEVE
ip21alert.exe: TSK_ALERT
PatternMatchTask.exe: TSK_PMON
RootCauseTask: TSK_ROOT_CAUSE
tsk_erp.exe: TSK_ERP
opcua_c_client.exe: TSK_OPCUA
ReplicationSubscriberNG.exe: TSK_SUBR
ReplicationPublisherNG.exe: TSK_PUBR
cimio_c_client.exe: Cim-IO Main Client Tasks (TSK_M_device)
cimio_c_async.exe: Cim-IO Async Client Tasks (TSK_A_device)
cimio_c_unsol.exe: Cim-IO Unsolicited Client Tasks (TSK_U_device)
cimio_c_changeover: TSK_DETECT
You must also exclude all folders related to Aspen InfoPlus.21 including the root folders of the history repositories. Keywords: Virus Symantec Endpoint Exclude Exclusion References: None
Problem Statement: Why is PRDMDLD not anti-transformed in Aspen APC controllers?
Solution: PRDMDLD is the unbiased model prediction calculated in Aspen Watch for each controlled variable (CV) in Aspen DMCplus and APC Online control applications. One commonly used troubleshooting technique for prediction performance is to track the controller CV DEP (Measurement) against PRDMDLD and observe any model mismatch. PRDMDLD (known as ModelPrediction in RTE applications) is an unbiased model prediction for the dependent variable current value. Since it is NOT bias updated, the value can drift significantly away from DEP (Measurement in RTE applications) over time. Due to this drift, Aspen Watch does not apply an anti-transformation to PRDMDLD and keeps it in the transformed space, because anti-transforming it could cause numerical issues or produce an invalid value. Therefore, for CVs with transforms it is always recommended that the user use DEPA (TransformedMeasurement in RTE) instead of DEP (Measurement) when investigating model prediction performance: compare DEPA with PRDMDLD. DEPA is the transformed version of DEP. PRDMDLD is maintained in the transformed space and cannot always be anti-transformed accurately if the value has drifted out of the valid range for the transformation. For this reason, PRDMDLD is adjusted periodically (on the first day of every month at 00:00) to match the transformed measurement (DEPA) and avoid large deviations due to drift. Additionally, "Engine Measurement" (DEPA/VINDA) is automatically added into the History plots whenever a transform exists for a variable, so that the user can compare PRDMDLD with the transformed measurement. Keywords: PRDMDLD, Transform, Prediction analysis, DEPA, VINDA References: None
Problem Statement: What is the use of table POOLPROP and where can it be found?
Solution: This KB Article explains the use of the table POOLPROP and where it can be found. Table POOLPROP is an internal table created during generation of the matrix; it keeps track of the recursion structures and works out the bounds for the Q-variables. It can be a useful diagnostic tool to review where the bounds of a model come from. When using XLP, the pool collector columns will only show 999s in the validation reports. This is because those quantities are recursed values and are represented by quality variables in the full nonlinear model. 999 keywords are not replaced with PGUESS values because those values are not used as initial guesses. You can see all the PGUESS values and the recursed property ranges in the table POOLPROP. This table is found under the Tables tab, at Internal Tables | POOLPROP. Keywords: None References: None
Problem Statement: Aspen Plus document (.apw) files contain an embedded backup (.bkp) file. Is it possible to extract the .bkp file included in the .apw file if, for example, the .apw file is corrupted?
Solution: Here are the steps to recover the embedded archive .bkp file from an .apw file. If the .apw file is embedded in a compound file (.apwz), there is an extra step before recovering the bkp file: change the extension of the .apwz file to .zip, then extract the .apw file from the .zip file (the .apwz will contain an .apw file if the user chose to save results in it) and proceed with the steps below. In this example, the file to recover is Simulation1.apw, which was saved in a folder named Recover on the desktop. The steps are as follows: 1. Copy the .apw file into a new folder; in this example it was named Recover. 2. Open a Command Prompt window. 3. Change directories to where the apstgutl.exe utility is located: For V11.0: cd \Program Files\AspenTech\AprSystem V11.0\GUI\xeq For V10.0 and previous: cd \Program Files (x86)\AspenTech\AprSystem VX.X\GUI\xeq (VX.X depends on the version: V9.0, V8.8, etc.) 4. Press Enter and type the following: apstgutl.exe c C:\Location of the file\name of the file.apw In this example: apstgutl.exe c C:\Users\USER09\Desktop\Recover\Simulation1.apw 5. Press Enter and it will generate a .bkp file in the same folder where the .apw file is located. For Aspen Plus this will generate a .bkp and .apmbd file. For Aspen Properties this will generate an .aprbkp and .apmbd file. Keywords: corrupted file, bkp, apw, apwz References: None
Problem Statement: If an output file exists, Aspen SQLplus overwrites the contents of the output file each time a query executes the command SET OUTPUT. This technical tip explains how to generate unique file names using the current timestamp so the results of a query are not lost from execution to execution.
Solution: Use the CAST FORMAT command to prepare a file name using a text string containing the date and time. By concatenating the UTC hour to the CAST FORMAT command, the file name remains unique even when changing from daylight saving time back to standard time. Use the function ISO8601 to convert the current time to UTC format; the UTC hour is in characters 12 and 13. For example, the query set output 'c:\results_'||cast(current_timestamp as char format 'YYYYMMDD_HHMISS')||'_'||substring(ISO8601(CURRENT_TIMESTAMP, 1) from 12 for 2)||'.txt'; created a file named c:\results_20200430_110245_16.txt at 11:02:45.2 AM CDT in the United States on April 30, 2020. The knowledge article How to display a four digit year when selecting process data explains how to use CAST FORMAT. Keywords: CAST FORMAT file name set output UTC ISO8601 substring References: None
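Putting it together, a report query can open a uniquely named file, write its results, and then restore normal output with SET OUTPUT DEFAULT. A minimal sketch (the tag mask is a placeholder; the file-name expression is the same construct used above):
set output 'c:\results_'||cast(current_timestamp as char format 'YYYYMMDD_HHMISS')||'.txt';
-- report body: any SELECT whose results should land in the file
select name, ip_description from ip_analogdef where name like 'ATC%';
set output default;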
Problem Statement: The following message is displayed in the “Run controls” menu when trying to run Blowdown analysis:
Solution: This is related to a convergence issue. Some general recommendations to resolve these issues: Reduce the timestep in the “Run controls” menu. Simplify the component list by removing heavier components or water present in small molar fractions. Use the Blowdown template “Single Vessel BLOWDOWN - SingleVessel.blo” to avoid pressure drop calculations in piping upstream of the orifice. Simplify the pipe network by lumping pipes together into a single holdup, if possible. The user should determine which simplifications are reasonable for their simulations, and carefully evaluate the results. Keywords: Blowdown, convergence, warning, timestep, heavy components References: None
Problem Statement: What is the meaning of Warnings W737, W769 and W753?
Solution: This KB Article explains warnings W737, W769 and W753. W737: This means the constant property data in BLNPROP has a periodic row, for example ALK1, but no base row ALK. This can be a problem if the other periodic rows are also missing for the material. W769: The column in table BLNSPEC is not defined elsewhere in the model as a material or a group. You should remove the column (or define it elsewhere in the model, such as in BLENDS or BLNMIX). W753: Only a predefined number of warnings is written to the execution log itself (controlled by the reporting setting). This warning just lets you know there are additional instances of the mentioned warning in warn.lst. You can resolve those warnings if you wish. Keywords: None References: None
Problem Statement: Where should the Aspen InfoPlus.21 record SAVE_SNAP save hourly snapshots?
Solution: By default, the Aspen InfoPlus.21 record SAVE_SNAP saves hourly snapshots of the Aspen InfoPlus.21 database to the file InfoPlus21.snp located in the Aspen InfoPlus.21 Group200 folder. Because the record TSK_SAVE instructs Aspen InfoPlus.21 to also save a snapshot of the in-memory database to InfoPlus21.snp located in the Group200 folder when Aspen InfoPlus.21 stops, some users change the file name of the snapshot in SAVE_SNAP to a different name in an effort to keep TSK_SAVE from overwriting the hourly snapshots generated by SAVE_SNAP. This is a mistake, and AspenTech strongly recommends leaving field FILE_NAME in SAVE_SNAP set to InfoPlus21.snp. Suppose SAVE_SNAP saved an hourly snapshot to a file named HourlySave.snp, and suppose that the Aspen InfoPlus.21 server crashed after running for six months meaning there was no orderly shutdown of Aspen InfoPlus.21. In this case, the most recent version of InfoPlus21.snp is six months old (because TSK_SAVE did not save a copy of the database when the server crashed), and that is the file that would be used by TSK_DBCLOCK when Aspen InfoPlus.21 restarts. Please see the knowledge base article Guide to creating an effective strategy for saving Aspen InfoPlus.21 database snapshots. Keywords: database backup References: None
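To confirm where each scheduled save record writes its snapshot, the records defined by DataBaseSaveDef can be listed from Aspen SQLplus. A small sketch using the FILE_NAME field discussed above:
-- Where does each snapshot-save record (including SAVE_SNAP) write to?
SELECT name, file_name FROM DataBaseSaveDef;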
Problem Statement: Can you delete an Aspen InfoPlus.21 history occurrence?
Solution: You cannot. You may insert an occurrence into history, you may modify a history occurrence, but you may not delete one. Key Words history repeat area delete insert add modify occurrence Keywords: None References: None
Problem Statement: How can I test if a port is open through a firewall between two servers (for example between an Aspen InfoPlus.21 and an Aspen Cim-IO server)?
Solution: One solution is to use telnet to test if a port is open between two servers. Simply open a command window as an administrator and enter the command: >telnet hostname portnumber where hostname is the remote server, and portnumber is the port to test. If telnet returns a blank screen, then the port is open between the two nodes; however, many times telnet is not installed on a server and clients are reluctant to install it or alternatives like PuTTY. An alternative is to use the System.Net.Sockets namespace in the .NET Framework from PowerShell to check connectivity. To do that, run PowerShell and execute the following statement: PS> $client = New-Object System.Net.Sockets.TcpClient("<HostName>", <PortNumber>) where <HostName> is the remote server name and <PortNumber> is the port to test. A process on the remote server must be listening on that port. If the connection fails, an error message will be displayed. If the connection succeeds, the command prompt will be displayed, and you should close the connection by issuing the following command: PS> $client.Close() Attached to this article is a Windows PowerShell advanced function named Test-Port that issues the .NET Framework commands. Download Test-Port.ps1.txt to a folder on your server and rename the file to Test-Port.ps1. Next, open Windows PowerShell as an administrator and navigate to the folder. Then enter the command: PS>. .\Test-Port.ps1 Note: There is a space between the two dots. This adds the command to the current environment and makes it available during the session like a built-in cmdlet. To use Test-Port enter PS>Test-Port hostname portnumber where hostname is the remote server, and portnumber is the port to test. Note: A process on the remote server must be listening on the port. If successful, Test-Port displays the message "Successfully connected to port portnumber on hostname". Otherwise Test-Port displays "Failed to connect to port portnumber on hostname". Help is available by entering PS>Help Test-Port Key Words Test-Port Telnet firewall check port Keywords: None References: None
Problem Statement: This Knowledge Base article provides query examples showing how to access the Aspen Production Record Manager (APRM) Administration and Configuration tables from Aspen SQLplus.
Solution: All the Administration and Configuration tables in the Aspen Production Record Manager Administrator database are accessible from Aspen SQLplus starting with version 10.1. Note: The queries listed below may have to be modified to include your data source and area names. The data source, server and area names used in these query examples are: ODBC data source name: APRM_Link Server name: b14-s2012-1 Area name: sdemo
-- Administration Tables
SELECT * FROM "APRM_Link"."DataSources";
SELECT * FROM "APRM_Link"."License";
SELECT * FROM "APRM_Link"."ServerDiagnostics";
SELECT * FROM "APRM_Link"."Tables";
-- Configuration. Some queries require a data source and area. If you leave them blank, then the default data source and default area are assumed and queried.
SELECT * FROM "APRM_Link"."Aliases";
SELECT * FROM "APRM_Link"."Areas";
SELECT * FROM "APRM_Link"."Characteristics";
SELECT * FROM "APRM_Link"."Designators";
SELECT * FROM "APRM_Link"."KPIs";
SELECT * FROM "APRM_Link"."Subbatches";
SELECT * FROM "APRM_Link"."SPCs";
SELECT * FROM "APRM_Link"."Units";
SELECT * FROM "APRM_Link"."b14-s2012-1"."Areas";
SELECT * FROM "APRM_Link"."b14-s2012-1"."sdemo"."Aliases";
SELECT * FROM "APRM_Link"."b14-s2012-1"."sdemo"."Characteristics";
SELECT * FROM "APRM_Link"."b14-s2012-1"."sdemo"."Designators";
SELECT * FROM "APRM_Link"."b14-s2012-1"."sdemo"."KPIs";
SELECT * FROM "APRM_Link"."b14-s2012-1"."sdemo"."Subbatches";
SELECT * FROM "APRM_Link"."b14-s2012-1"."sdemo"."SPCs";
SELECT * FROM "APRM_Link"."b14-s2012-1"."sdemo"."Units";
SELECT * FROM "APRM_Link"."M201"."Tags";
SELECT * FROM "APRM_Link"."b14-s2012-1"."sdemo"."M201"."Tags";
-- Batches table:
SELECT areaname, batchid FROM "APRM_Link"."b14-s2012-1"."sdemo"."Batches" where batchid = 3815;
Additional SQLplus query samples can be found in the Aspen Production Record Manager V11 ODBC Manual, available for download. Keywords: None References: Aspen Production Record Manager V11 ODBC Manual
Problem Statement: Part of the post-installation of Aspen Process Recipe [APR] and Aspen Process Sequencer [APS] is the DCOM configuration. Not all of the details are fully documented. This solution discusses the required DCOM configuration.
Solution: Here are the recommended DCOM configurations for Aspen Process Recipe and Aspen Process Sequencer. Go into dcomcnfg (Start > Run > dcomcnfg), then check/configure the following. In the My Computer Properties COM Security tab, check the following for both Access Permissions and Launch and Activation Permissions by clicking the Edit Default button and the Edit Limits button: Verify that INTERACTIVE, SYSTEM, EVERYONE, ANONYMOUS LOGON and NETWORK are added and have Local and Remote permissions. If ANONYMOUS LOGON is not desired, the launching user for ATM_ADMIN, ATM_EXEC, ATM_IP21 and ATM_SERVICE should be a domain admin user. See the last section of this KB about DCOM Config on how to add a domain admin account in the identity section. Also verify that the AspenRecipeUsers group and/or any other groups or individual users that need to access APR/APS are added with Local and Remote permissions. In the DEFAULT PROPERTIES tab, verify that Enable Distributed COM on this computer is checked, and that the Default Authentication Level is set to "Connect". Expand My Computer to DCOM Config, which displays the applications. Verify there is only one (1) instance of each of the following: ATM_ADMIN, ATM_EXEC, ATM_IP21, ATM_SERVICE. For each of the applications ATM_ADMIN, ATM_EXEC, ATM_IP21 and ATM_SERVICE: Select PROPERTIES. On the GENERAL tab, set Authentication Level to "DEFAULT". On the IDENTITY tab, select "This User" and type in the account information for the account used to start/stop the Aspen Transition Manager service. Restart the Aspen Transition Manager Service. Keywords: Process Recipe, dcom, dcomcnfg References: None
Problem Statement: Is there a way to change the pending file set "path" for a selected range of file sets within one Aspen InfoPlus.21 History Repository?
Solution: The attached Aspen SQLplus query changes the pending file path for several file sets in a repository. Aspen InfoPlus.21 must be running to execute this query. The query asks for a repository name and then the starting and ending file set numbers for which to change the pending file set path. Next, the query prompts for a case-sensitive string in the current pending file path to change, and for the substitution string. In this way, the query is similar to the Aspen InfoPlus.21 history utility h21chgpaths. After verifying your entries, the query substitutes the new string for the old string in the pending file path of each file set in the selected range. If you make a mistake, execute the query again to undo your changes. The changes do not take effect until Aspen InfoPlus.21 is restarted. Stop Aspen InfoPlus.21 after executing the query and move the file set folders from the old location to the new. Then restart Aspen InfoPlus.21; the pending file path becomes the current file path for all the selected file sets. You could liken this to the h21chgpaths utility often used when moving Aspen InfoPlus.21 from one machine to another. However, this query requires Aspen InfoPlus.21 to be running, while h21chgpaths requires Aspen InfoPlus.21 to be stopped. Both the query and h21chgpaths ask for a string to replace in the file set file path and a substitution string; however, h21chgpaths modifies all occurrences of the string in the file config.dat, while the query changes the pending file set path for a selected range of file sets in one specific repository. The query writes to shared memory, and therefore config.dat is not changed until Aspen InfoPlus.21 is stopped. Keywords: Pending file set path h21chgpaths createobject atip21histadmin References: None
Problem Statement: How to configure Aspen Calc security for stand-alone or shared servers.
Solution: Stand-alone: In the stand-alone configuration, users on the same machine have by-pass permission privilege to all Aspen Calc securable tasks. This setup also prevents any other network users from performing the same tasks on the server. When Aspen Calc is installed it does not include a securable object in the AFW Security Manager; therefore the default configuration is stand-alone. Shared: In the shared configuration, users on the same machine, as well as all other network users, have the same restricted access to the Aspen Calc securable tasks on the server, based on the Local Security roles they are assigned to. To configure a shared Aspen Calc server, add the securable object with the name of the Aspen Calc server (see below). Users will also need to be placed in roles and assigned access to appropriate securable tasks. Adding the Aspen Calc securable object: 1. Open the AFW Security Manager. 2. Right-click on Applications and select All Tasks | Import Applications. 3. Browse to the AspenCalc\Bin folder and select AspenCalc_Base_security_install.xml. 4. Expand Aspen Calc, right-click on Servers and select New | Securable Object. 5. Name the securable object the same as the node name of the Aspen Calc server. 6. Right-click on the securable object and select Properties to add roles to Aspen Calc functions. KeyWords: security calc shared stand alone Keywords: None References: None
Problem Statement: How to display timestamps using a four digit year when selecting historical data using Aspen SQLplus.
Solution: Instead of using an Aspen InfoPlus.21 timestamp format defined by TimeStampFormDef, the CAST statement accepts a FORMAT clause that allows you to determine how a query displays a timestamp. The general syntax of the command is CAST(timestamp as type FORMAT 'template') where timestamp is an Aspen InfoPlus.21 timestamp, type is either CHAR or TIMESTAMP, and template is built from the following items:
YYYY: Four digit year (e.g. 2018)
YY: Two digit year (e.g. 18)
MM: Two digit month number (e.g. 01 for January)
MON: Three letter month (e.g. JAN for January)
DAY: Day of week (e.g. MONDAY)
DD: Two digit day of month
DY: Three letter day of week (e.g. MON for Monday)
HH: Two digit hour of day (e.g. 08 for 8:00 AM and 20 for 8:00 PM)
MI: Two digit minutes within the hour
SS: Two digit seconds within the minute
T: Single digit tenths of second
Note: The items in template must be upper case. MI is used for minute and MM for month; therefore HH:MM displays hours:month instead of hours:minutes. For example, the query
write CAST(CURRENT_TIMESTAMP as CHAR FORMAT 'DD-MON-YYYY HH:MI:SS')
produces output similar to 21-MAR-2018 13:04:14 and the query
SELECT cast(ts as char format 'YYYY-MON-DD HH:MI:00') as "Trend Time", avg(value) using 'F7.2' width 8 BY name from history where period = 00:01:00 and ts between cast(current_timestamp as timestamp format 'DD-MON-YY HH:MI:00') - 00:10:00 and cast(current_timestamp as timestamp format 'DD-MON-YY HH:MI:00') and name in ('A1113E', 'A1113F', 'ATCAI') GROUP BY "Trend Time" ORDER BY "Trend Time" Desc;
produces output similar to
Trend Time A1113E A1113F ATCAI
2018-MAR-21 13:01:00 31.00 5.00 6.32
2018-MAR-21 13:00:00 36.00 10.00 5.46
2018-MAR-21 12:59:00 27.00 8.00 9.87
2018-MAR-21 12:58:00 35.00 8.00 9.62
2018-MAR-21 12:57:00 32.00 1.00 1.41
2018-MAR-21 12:56:00 31.00 3.00 10.62
2018-MAR-21 12:55:00 34.00 3.00 2.33
2018-MAR-21 12:54:00 29.00 0.00 3.46
2018-MAR-21 12:53:00 36.00 0.00 3.74
2018-MAR-21 12:52:00 31.00 5.00 9.87
Notice the second query used CAST FORMAT to round current_timestamp back to the previous minute. KeyWords: CAST FORMAT Timestamp format four digit years Keywords: None References: None
Problem Statement: Why is ATM_Admin running and taking up 4 tokens on a Transition Manager server while no transition is active?
Solution: When Aspen Process Sequencer applications are enabled in the Production Control Web Server (PCWS), ATM_Admin is required to be running on the Transition Manager server regardless of whether any transition is active or not. As a result, the Transition Manager consumes a standard token count, in this case 4 tokens. This is by design and expected behavior, and it will remain the same in future releases. Keywords: ATM_Admin Transition Manager References: None
Problem Statement: This article presents an effective strategy for backing up Aspen InfoPlus.21 history files.
Solution: The knowledge base article How to create a back-up of history files explains the mechanics of creating history file backups. History file backups should be placed on a disk drive other than the one used for storing file sets for Aspen InfoPlus.21 history repositories. So, for example, if history repositories are stored on the E: drive, then you could place your history backups on the F: drive. The history backup drive may be located locally on the Aspen InfoPlus.21 server or on a remote server. You should allocate as much space to store history backups as you do for live historical data. If your history backup drive is located on a remote server, please see the article How to use a Mapped Network Drive by letter rather than fully qualified name in queries. Create a root folder named IP21_Backups on the history backup drive, and then create two sub-folders named Snapshots and History. The Snapshots folder may be used to store database snapshots; see the article Guide to creating an effective strategy for saving Aspen InfoPlus.21 database snapshots. Create three sub-folders in the History folder named System, Active, and ShiftedAndChanged. Use the Aspen InfoPlus.21 Administrator to determine the number of history repositories defined for your Aspen InfoPlus.21 server. In this example, there are three. Create a record defined by HistoryBackupDef named HistoryBackup (note: this record may already exist). In the fixed area of HistoryBackup, set the field LOG_FILE to "F:\IP21_BACKUPS\History\h21arcbackup.log", the field SAVE_LOCATION to "F:\IP21_BACKUPS\History\System", and the field POST BACKUP COMMAND to "%h21%\etc\system_cleanup.bat F:\IP21_BACKUPS\History\System 5". Also, set the field NUMBER OF REPOS to the number of repositories defined for your Aspen InfoPlus.21 server. Open the repeat area NUMBER OF REPOS. Select the name of each repository in the REPOSITORY_NAME field, enter YES in the field SAVE ACTIVE, and enter the location where the active file sets will be saved in the field ACTIVE LOCATION. In the example, the active file sets will be saved to the folder F:\IP21_BACKUPS\History\Active. Find the column LAST ACTIVE COMMAND and enter "%h21%\etc\active_cleanup.bat F:\IP21_BACKUPS\History\Active reposname 5" where reposname is the name of a repository. This activates the batch procedure active_cleanup.bat to purge the number of saved active file sets to 5. You may adjust the number of active file set backups by changing the number. Find the columns SAVE SHIFTED and SHIFTED LOCATION. Enter YES for each repository in the field SAVE SHIFTED and the location of the folder ShiftedAndChanged in the field SHIFTED LOCATION. Find the columns SAVE CHANGED and CHANGED LOCATION. Enter YES for each repository in the field SAVE CHANGED and the location of the folder ShiftedAndChanged in the field CHANGED LOCATION. Test the history backup configuration by entering YES in the SAVE NOW field in the fixed area of HistoryBackup. Finally, enter a scheduling interval in the field RESCHEDULE_INTERVAL and the first time history backup is supposed to run in the field SCHEDULE_TIME. Keywords: None References: None
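The SAVE NOW test described above can also be triggered from Aspen SQLplus instead of the Administrator. This is a sketch only; it assumes the record is named HistoryBackup and relies on SQLplus mapping the spaces in the field name SAVE NOW to underscores, so verify both against your system first.
-- Trigger an immediate history backup test (record and field names as assumed above)
UPDATE historybackupdef SET save_now = 'YES' WHERE name = 'HistoryBackup';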
Problem Statement: The Aspen SQLplus SYSTEM command allows a query to execute operating system instructions such as 'DIR.' If the 'z' drive is local to the Aspen InfoPlus.21 machine, then the command system 'dir z'; would return the same information as if the directory command were executed at a Windows command prompt. However, if 'z' is a network drive, Aspen SQLplus may return the following error 'The system cannot find the path specified'
Solution: There are two ways to use a non-local drive in an Aspen SQLplus query. 1) Rather than referencing a mapping letter, use the fully qualified name, such as: System 'dir \\computername\sharename'; 2) Execute a query line with the 'net use' command before referencing the network drive, as follows: System 'net use z: \\computername\sharename'; (where any free drive letter could be used) From then onwards you can use 'z' in as many queries as you want without having to repeat the 'net use' statement. The network assignment remains available to all Aspen SQLplus queries even after restarting Aspen InfoPlus.21 as long as you do not stop the Aspen InfoPlus.21 Task Service. To re-establish the network assignment to the z: drive after rebooting the Aspen InfoPlus.21 server, so that all of the originally created queries continue to work, create a QueryDef record (e.g. AssignZdrive) where the query_line says: system 'net use z: \\computername\sharename' Then set the field AssignZdrive #WAIT_FOR_COS_FIELDS to 1 (one), set the field AssignZdrive WAIT_FOR_COS_FIELD[1] to "TSK_SAVE LAST_LOAD_TIME", and set AssignZdrive COS_RECOGNITION[1] to ALL. This activates the query AssignZdrive when Aspen InfoPlus.21 starts, making the network drive available to all Aspen SQLplus queries. Keywords: net use path not found assign network drive Non-Local Drive system cannot find the drive specified References: None
Problem Statement: Getting a list of A1PE users from Aspen Process Data Rest
Solution: There is a way to see which users are currently using aspenONE Process Explorer (A1PE). It can be done through the Aspen Process Data REST Samples page. Please follow the steps below. 1. Open your web browser on the A1PE web server and paste the following URL: http://localhost/ProcessData/Samples/Sample_Home.html 2. Press Enter; when the new page is shown, click on License. 3. Choose “View” for Function | Format Type: “Format” | Request Type: “POST”. 4. Click “Issue Request”. You will now be able to see which users are currently using A1PE and the time they have been using the application. If the same user has opened A1PE in two different internet browsers, this will also be shown in the table (see example below). Keywords: Users A1PE Aspen Process Data Rest License References: None
Problem Statement: How to use Markers with Tag Values in aspenONE Process Explorer?
Solution: One of the popular features of “scooters” in Aspen Process Explorer was the ability to see tag values at the Marker point on the chart. Clicking and holding anywhere in the plot area places a new Marker on the chart. Click the Marker repeatedly to toggle the display mode: show just the annotation line, just the timestamp (displayed at the top of the plot area), or the timestamp along with all the tag values. Click and hold until the Marker background of the flags turns blue; the Marker can then be repositioned. If you hold but do not reposition the Marker, a Marker prime is created that you can reposition to represent a period of time of interest. A Marker can be “scooted off” the chart. Keywords: scooter A1PE References: None
Problem Statement: Crude Inventory import fails from time to time on some users’ machines. There are no error messages, but data from Excel is not populated in the APS Audit Inv screen.
Solution: You can find basic recommendations for Oracle DB configuration to avoid potential issues with EIU at the link https://esupport.aspentech.com/S_Article?id=000062200. Additionally, in case of issues with Crude Inventories import, please consider testing the CRDINV_WRITE_USING_BATCH configuration. The “CRDINV_WRITE_USING_BATCH” keyword allows APS to write data in bulk from a text file into the import tables using SQLldr. With that flag off, APS will not use SQLldr and will write records one by one. The true solution is to set up SQLldr correctly for the affected users; the workaround is to turn the keyword off (set N in table CONFIG). To add that row with the N value to the CONFIG table, you could execute the query INSERT INTO CONFIG(ID,VALUE_) VALUES('CRDINV_WRITE_USING_BATCH','N'). Keywords: EIU CONFIG crude inventory import Oracle db References: None
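To complement the INSERT above, the current setting can be checked or later reverted with simple SQL against the same CONFIG table (table and column names as given in the article; the enabling value 'Y' is an assumption, so confirm it before re-enabling). Run these against the APS database:
-- Check whether the keyword row exists and what it is set to
SELECT ID, VALUE_ FROM CONFIG WHERE ID = 'CRDINV_WRITE_USING_BATCH';
-- Re-enable bulk loading via SQLldr once it is configured correctly (value 'Y' assumed)
UPDATE CONFIG SET VALUE_ = 'Y' WHERE ID = 'CRDINV_WRITE_USING_BATCH';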
Problem Statement: Which web browsers are supported in Aspen Multi-Case V12?
Solution: Users can use one of the following web browsers for Aspen Multi-Case: Google Chrome Microsoft Edge (Version 79 or higher) Note: Internet Explorer is not supported. Keywords: Aspen Multi-Case, browser References: None
Problem Statement: How to export an Aspen Multi-Case project?
Solution: The project configuration and results are saved in a database. To share projects between different users, you can export projects from the database and import them into another database as needed. To export an Aspen Multi-Case project: On the Projects view, in the row associated with the project that you want to export, click the button in the row, and then select Export Project. The Save As dialog box appears. On the Save As dialog box, navigate to the desired location and specify the desired File name. Click Save. The project is exported with a .mcz extension. Keywords: Exporting projects, mcz, etc References: None
Problem Statement: Which file types are supported in Aspen Multi-Case V12?
Solution: The file types supported in Aspen Multi-Case V12 are as follows. The following HYSYS file types are supported: .hsc .hscz The following Aspen Plus file types are supported: .bkp .apw .apwz Limitations / Exclusions The following Aspen HYSYS and Aspen Plus cases cannot be attached to Aspen Multi-Case projects: HYSYS cases in Dynamics mode Cases that contain Equation Oriented (EO) Sub-Flowsheets / modeling Aspen Plus cases that contain Calculator blocks with Excel as the Calculation method Note: For Aspen Plus files that use Aspen EDR (Exchanger Design & Rating) capabilities, only .apwz files can be attached to Aspen Multi-Case projects. Additionally: Aspen Multi-Case does not have access to Activated Economic Analysis or Activated Energy Analysis data. However, you can still create an Aspen Multi-Case project for a file containing this data. Safety Analysis data is not included in Aspen Multi-Case calculations. However, you can still create an Aspen Multi-Case project for a file containing this data. Keywords: Aspen Multi-Case, Supported files, etc References: None
Problem Statement: Aspen HYSYS allows the user to create their own utilities but what is the meaning of all the parameters?
Solution: These are the definitions of the parameters used in the Process Utilities Manager: Inlet Temperature: The temperature of the flow entering the heat exchanger. Outlet Temperature: The temperature of the flow leaving the heat exchanger; the outlet temperature is always different from the inlet temperature. HTC (heat transfer coefficient): The average heat transfer coefficient of the utility; it depends on the properties of the utility and is normally supplied by the vendor. If the user doesn't have it, the user can activate the "Calc HTC" box and it will be calculated from the utility properties (i.e. viscosity, density, Cp). Cost Index: The cost index represents the cost of supplying (for a hot utility) or removing (for a cold utility) a unit quantity of energy. ARH (Application Range High) & ARL (Application Range Low): These temperature limits are calculated considering the overall system, to identify the feasible temperature range the process utility can serve. One limit is identified from the thermodynamic limitation (i.e., the approach temperature must be greater than the specified delta T of the utility). The other limit is obtained using the exergy principle. The low temperature hot utility is preferred when more than one hot utility is feasible. Similarly, the high temperature cold utility is preferred when more than one cold utility is feasible. DTmin: The minimum approach temperature required when the utility is used to supply or remove heat. According to the thermodynamic feasibility criteria, the Delta T min value must be greater than 0°C, but for practical reasons its value should be greater than 1°C. Viscosity: The dynamic viscosity of the utility. Conductivity: The thermal conductivity of the utility. Eff.Cp: The heat capacity of the utility. The criterion used for these values is normal operating conditions of the utility; in other words, the inputs are the average values for the utility. Keywords: Process Utilities Manager, Parameters, Values, Average References: None
Problem Statement: I'm getting an "Exception has been thrown by the target of an invocation" message when running my Aspen FLARENET case. What does this error mean?
Solution: This error message usually indicates that the Microsoft Access OLEDB database engine matching the version of Office running on the machine (32- or 64-bit) needs to be reinstalled. Please try the following workflow: 1. Uninstall the current Access database engine from Programs & Features in the Control Panel. 2. Download the appropriate engine (32- or 64-bit, depending on which version of Office you have) from https://www.microsoft.com/en-US/download/details.aspx?id=13255 3. Install the Access database engine. Keywords: None References: None
Problem Statement: aspenONE Process Explorer (A1PE) is not working or gives the error "Invalid Field Type" for tags to be plotted, or the Admin page gives the error "Process Data error 401".
Solution: 1. Enable Windows Authentication in IIS - Sites - Default Web Site - ProcessData. 2. Enable "Use the Process Data Service" in the AT PD Rest Config file (C:\inetpub\wwwroot\AspenTech\ProcessData\AtProcessDataREST.config) by changing the setting from "False" to "True". 3. Flush the Process Data cache (http://localhost/ProcessData/Samples/Sample_Admin_Query.aspx): execute an “Admin Cache Flush” of Process Data from the Samples page, http://localhost/aspentech/ProcessData/Samples/Sample_Home.html. Under the Admin Queries category, click the “Admin” hyperlink, set the “Request Type” to the “Admin/FlushCache” selection, and click the “Issue Request” button. 4. Perform an IIS reset by entering the command iisreset in CMD, as shown in the figure: Keywords: A1PE, Invalid Field Type References: None
Problem Statement: When configuring a new Cim-IO Interface the CimIO_Logical_Device.def file is not updated.
Solution: The concept of a logical device is only used by a Cim-IO client and is not used by the Cim-IO interface. On a Cim-IO client system like Aspen InfoPlus.21, the CimIO_Logical_Device.def file is used by the Cim-IO client applications to identify the Cim-IO interface they connect to. In the case of Aspen InfoPlus.21 there is a logical device record defined by IoDeviceRecDef. When the Cim-IO client tasks associated with the logical device record start, they use details in CimIO_Logical_Device.def to provide the node name and interface service name for the logical device. Example CimIO_Logical_Device.def: Device1 Node1 Interface1 Device2 Node2 Interface2 When a new Cim-IO connection is created, the Cim-IO IP.21 Connection Manager will configure CimIO_Logical_Device.def along with creating the logical device record and the Cim-IO client records (TSK_M_Device1, TSK_A_Device1, TSK_U_Device1). Special consideration: If using the Cim-IO Test Client “CimIO_T_API” on the Cim-IO interface server, the CimIO_Logical_Device.def must be configured so the CimIO_T_API can connect to the interface. Use the option “2-Test Adding a Logical Device” in the CimIO_T_API to add the required entries. KeyWords: Logicaldevice Logical Device Interface Keywords: None References: None
Problem Statement: When using the CIM-IO Test API to read an IP.21 tag through the IP.21 OPC DA server, if the tag name includes a dot, it will return "GET successful" and "Bad Tag". How to resolve this issue?
Solution: There are two solutions. 1. Add double quotes to the IP.21 tag name that includes a dot, and the CIM-IO Test API will return "Get successful" and "Status is GOOD". 2. Use the "Cim-IO for IP.21" interface instead of the "Cim-IO for IP.21 OPC DA" interface to build the CIMIO logical device. By doing this, the CIM-IO Test API will return "Get successful" and "Status is GOOD" for the IP.21 tag that includes a dot without adding double quotes. KeyWords CIM-IO Test API Bad Tag IP.21 tag name that includes a dot Keywords: None References: None
Problem Statement: After installing the Aspen Engineering Suite, OneDrive storage was maxed out. This can happen because Aspen Exchanger Design & Rating (EDR) installs software-specific databases in the user file space, which may cause users to quickly run out of cloud storage space. How can this location be changed to avoid the issue?
Solution: There are two options to change the location where the Aspen EDR databases are stored. The easier option is to change it from Aspen EDR itself, by manually changing this location. Follow the steps below:
Open Aspen EDR Vx.x
Go to File | Options | Files
Change the database folder to any other location
Close Aspen EDR
Re-open it
Note: Each user would need to do this, as the options are per user / per machine.
If for any reason you cannot open Aspen EDR, you can change the database folder location by modifying the user.config file located in C:\users\"username"\AppData\Roaming\Aspen_Technology,_INC\"AspenEDR.exe_Url_......\37.0.0.380\" to point to a custom database location on the local C: drive (or other). The user.config file should be edited as below:
<?xml version="1.0" encoding="utf-8"?>
<configuration>
  <configSections>
    <section name="progSettings" type="AspenTech.EDR.Components.Configuration.ProgSettingsConfigSection, AspenTech.EDR.Components.Configuration, Version=37.0.0.380, Culture=neutral, PublicKeyToken=null" allowExeDefinition="MachineToLocalUser" />
  </configSections>
  <progSettings>
    <settings customizedDatabaseFolder="C:\EDRDB" />
  </progSettings>
</configuration>
Note: This file can only be modified after installation, as it is created only when the program runs. In case you are using a script to change this folder for all users at once, these are the steps you should follow (see the sketch after this list):
1. Run Aspen EDR
2. Close it
3. Change the DB location to the one you would like in the created user.config file
4. Move the existing DB files to the new location
5. Re-run Aspen EDR
6. Close it
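Where the change must be pushed to many user profiles at once, a minimal batch-file sketch, assuming a pre-edited user.config has been staged on a share; the share path and version folder 37.0.0.380 are assumptions, and the wildcard stands in for the elided AspenEDR.exe_Url_... folder name, which differs per machine:
rem Hypothetical rollout of a pre-edited user.config to every local profile
rem (each user must have run Aspen EDR once so the target folder exists; run as a .bat script)
for /d %%U in ("C:\Users\*") do (
  for /d %%V in ("%%U\AppData\Roaming\Aspen_Technology,_INC\AspenEDR.exe_Url_*") do (
    copy /y "\\server\share\user.config" "%%V\37.0.0.380\user.config"
  )
)
Keywords: Aspen EDR Database; File Location; Cloud Storage; References: None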
Problem Statement: Why are elevation changes not taken into account by default in a pipe network in Dynamics mode?
Solution: As mentioned in the article "Is it possible to include static head contributions in dynamic simulations?", the user can include static head contributions in Dynamics mode. The Pipe Segment model is different, however: by default, Aspen HYSYS Dynamics does not consider the elevation change entered on the Pipe Segment Rating tab. The user needs to define the Base Elevation Relative to Ground for all the unit operations, including mixers, and enable static head contributions in the Integrator. For example, if a pipe discharges at a height of 10 ft, it is logical to place the manifold (mixer) at the same height; likewise, if the other pipes start at an elevation of 5 ft, setting their base elevation to 5 ft allows them to rise to the 10 ft elevation of the manifold. Keywords: Base Elevation Relative, Elevation Changes, Static Head Contributions, Manifold References: None
Problem Statement: What is the registry variable that makes the application look only for a standalone license file?
Solution: If a standalone user wants the application to check out licenses only from the standalone license file, the user can set the registry variable "no-net" to 1 instead of the default 0. With the default value "0", the machine will look for a standalone license first and then for a network license. Setting this registry value to 1 makes the machine look only for a standalone license file. Registry path: Computer\HKEY_LOCAL_MACHINE\SOFTWARE\AspenTech\SLM\Configuration\no-net
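A minimal cmd sketch for making this change from an elevated prompt; the value type is an assumption, so verify the existing type of no-net in regedit and match it:
rem Hypothetical: force standalone-only license lookup (value type assumed; confirm in regedit first)
reg add "HKLM\SOFTWARE\AspenTech\SLM\Configuration" /v no-net /d 1 /f
Keywords: no-net, license only from standalone license key, registry References: None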
Problem Statement: What are the infrastructure requirements to transfer log files to AspenTech via secure FTP (sFTP)?
Solution: To transmit log files via secure FTP, the standard TCP port 22 (SSH) must be open in all the filtering devices (routers, firewalls, ...) between the machine where the Aspen Upload Tools (AUT) is run and the Internet gateway. If you know you have such filtering devices in your enterprise network (or if you are unsure), please contact your local IS team to verify the filtering settings and change the configuration where necessary. The connectivity to our sFTP server can be verified in 2 different ways:
1/ From the operating system. Open a DOS window and run the following command:
Telnet alcsftp1.aspentech.com 22
If the connection is successful, it will bring up a prompt such as "SSH-2.0-OpenSSH_for_Windows_7.7". This is a convenient test in the sense that it can be done even before installing the Aspen Upload Tools.
2/ In the Aspen Upload Tools. Use the "Test Connection" button on the sFTP tab of the Aspen Upload Tools.
Note: In some companies the SLM License Manager is considered a business-critical service and security policies do not allow opening an external connection to/from this server. To work around this constraint, the "Network Share" option of the Aspen Upload Tools makes it possible to move the log files internally to a less critical machine from which an external sFTP link can be opened.
Keywords: firewall, router, sFTP, logs, upload, connectivity References: None
Problem Statement: Using WRlfTool to manually configure SLM redundant servers
Solution: Open WRlfTool.exe from the <C:\Program Files (x86)\Common Files\AspenTech Shared\SLM Administration Tools> location and follow the procedure below to manually configure/modify the redundant license servers.
1. Select File | New.
2. Add the license server information <hostname and IP address> for all three license servers using the Add or Delete Server option.
3. Add the redundant license file provided by AspenTech in the bottom window.
4. Select File | Save and save these details in the configuration file (the default name is lservrlf).
5. Place the lservrlf file into the license server's directory <C:\Program Files (x86)\Common Files\SafeNet Sentinel\Sentinel RMS License Manager\WinNT> on all the redundant servers.
6. Restart all the redundant license servers.
Keywords: Redundant license, WRlfTool, SLM, Manual configuration References: None
Problem Statement: How does the License Checkout system work for Aspen ProMV? When using Aspen ProMV it can be useful to understand how many licenses are used and how many are used by the different areas of the software, together with understanding how many licenses different models use.
Solution: In the table below you can view a summary of the different license names, together with a description of how many licenses are used by each area and activity of the ProMV tools.
Application | License Name | Area of use | Number of licenses checked
Desktop | SLM_RN_APM_PROMV_STD | continuous/batch process | 1 license per running application instance
Desktop | SLM_RN_APM_PROMV_BATCH | batch process | 1 license per running application instance
Online model execution | SLM_RN_APM_PROMV_ON_BAT | batch process | 1 license per running model
Online model execution | SLM_RN_APM_PROMV_ON_CON | continuous process | 1 license per running model
Online user-based access | SLM_RN_APM_PMV_CFG_CON | continuous process | 1 license for 1 concurrent access user session in the editing/updating environment
Online user-based access | SLM_RN_APM_PMV_USER | continuous/batch process | 1 license for 1 concurrent access user session in ProMV Online (editing/updating environment, runtime, diagnostic view)
Keywords: SLM Tokens Checkout ProMV Online ProMV Online Continuous ProMV Batch Online ProMV Desktop References: None
Problem Statement: When trying to run the docker-compose setup in the Aspen Mtell Maestro installation, you might get one of the following error messages. These errors prevent you from finishing the Maestro installation.
Error for builder: Cannot create container for service builder: Conflict. The container name "/builder" is already in use by container [CONTAINER STRING]. You have to remove (or rename) that container to be able to reuse that name.
Error for online: Cannot create container for service online: Conflict. The container name "/online" is already in use by container [CONTAINER STRING]. You have to remove (or rename) that container to be able to reuse that name.
Solution:
1. Open the docker-compose.yml file in Notepad.
2. Delete every line that starts with container_name.
3. Save and close docker-compose.yml.
4. Open Command Prompt (run as Admin).
5. Navigate to the folder containing the docker-compose.yml file. To do this, type cd followed by the file path (for example, cd C:\ProgramData\AspenTech\Aspen Mtell Maestro) and press Enter.
6. Enter the command docker system prune -f and press Enter. a. This command cleans up Docker. b. This command might take a couple of minutes.
7. Enter the command docker-compose down and press Enter.
8. Enter the command docker-compose up and press Enter.
9. If Step 8 fails, update the yml file by changing the value of replicas to 1 for every service (see the sketch below), and try Steps 7 and 8 again.
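A hedged sketch of the Step 9 edit in docker-compose.yml, assuming the replicas value sits under a deploy key as in the Compose v3 schema; the service name shown is illustrative, and the same change applies to every service in your file:
services:
  builder:          # illustrative service name
    deploy:
      replicas: 1   # set to 1 for every service
Keywords: Docker Docker Compose Maestro deployment References: None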
Problem Statement: In Aspen Mtell System Manager, Asset Sync and/or Work Sync have stopped running automatically.
Solution: One potential solution is to restart the services associated with Asset Sync and/or Work Sync:
1. Type Services in the Windows search bar of the Aspen Mtell application server. Select Services from the list of results.
2. In the Services window, right-click on either Aspen Mtell Asset Sync or Aspen Mtell Work Sync, depending on which process is not running automatically. Click Restart.
3. Repeat step 2 for the other service if you need to restart both Asset Sync and Work Sync.
4. Back in Aspen Mtell System Manager, set the Asset Sync / Work Sync Run At time to one minute in the future (for example, if the current time is 12:30 PM, set the Run At time to 12:31 PM).
5. Wait one minute and confirm that Asset Sync / Work Sync starts automatically.
6. Change your Asset Sync / Work Sync settings back to what you previously had.
7. Repeat steps 4 - 6 for the other service if you restarted both Asset Sync and Work Sync.
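Where the Services console is unavailable, an equivalent hedged cmd sketch (run from an elevated prompt; the service names are assumed to match the display names shown in the Services list, so substitute the actual names from services.msc):
rem Hypothetical restart of the sync services
net stop "Aspen Mtell Asset Sync"
net start "Aspen Mtell Asset Sync"
net stop "Aspen Mtell Work Sync"
net start "Aspen Mtell Work Sync"
Keywords: MDM Settings Work orders Asset hierarchy Not updating SAP EAM References: None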
Problem Statement: When opening History Plots from A1PE for Aspen Watch tags, an AW_VIND_M error occurs. The history plots of variables in PCWS are not displayed when the plot type is set to A1PE, but are displayed when set to Web.21 HPT. The error displayed in the value column is: Map Name : AW_VIND_M is invalid
Solution: Restart the AspenProcessDataAppPoolx64 application pool on the web server from IIS Manager.
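An equivalent hedged cmd sketch using the built-in appcmd utility (run from an elevated prompt; recycling the pool is assumed to be sufficient here, and a full stop/start can also be done from IIS Manager):
rem Recycle the Process Data application pool
%windir%\system32\inetsrv\appcmd recycle apppool /apppool.name:"AspenProcessDataAppPoolx64"
Keywords: History Plot, Aspen Watch References: None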
Problem Statement: Why is the operating point beyond the compressor curve speed?
Solution: The system determines the curve shape by extrapolating the graph behavior, and the operating point is ultimately determined by the flow. If the operating point lies beyond the last curve entered, the system will still extrapolate the graph, but with some loss of accuracy. If the operating point lies between curves (for example, after adding a last curve at Speed = 1100 rpm), the system will interpolate on the graph for that particular operating level. Keywords: Aspen HYSYS, Operating Point, Compressor Curve References: None
Problem Statement: Can Aspen HYSYS predict the Wax Appearance Temperature (WAT)?
Solution: In crude or heavy oils, the cloud point is synonymous with the wax appearance temperature (WAT). The cloud point of a nonionic surfactant or glycol solution is the temperature at which the mixture starts to phase-separate and two phases appear, thus becoming cloudy. The steps below show how to add the Cloud Point correlation: Go to the Properties tab of the stream and click the green plus icon 'Append New Correlation'. Select Cloud Point, which is under the Petroleum option, and then click Apply. The Cloud Point correlation will be added at the bottom of the properties list of the stream. Keywords: WAT, Temperature, Aspen HYSYS References: None
Problem Statement: If a tag name exceeds 49 characters, the collect.exe tool will crash upon trying to read the input file, and collection will not be successful.
Solution: There is a hard limit on the character length of tag names used in the collect input file. This limit is 49 characters; tags whose names have 50 or more characters will cause the collection process to end abruptly. The limit includes the tag extension. This limitation is only present for data collection using the Aspen Advanced Control Collect tool. The workaround for this issue is to use other collection tools, such as Aspen Watch or DMC3 Builder data collection. Another workaround is to use tag aliasing; this solution differs depending on the OPC server and requires the documentation of the specific server type. Keywords: Collect Tag name References: None
Problem Statement: How to resolve the issue where a PIMS model runs fine in AO but crashes in DR?
Solution: The crash is caused by an array out of bounds in the DR MG code in Fortran. Double the model table limits (rows=10000, columns=4000, rows*columns=1000000) and the crash will go away. You also need to increase the LP matrix nonzeros to avoid an MG error; set it to 1000000. There are no plans to fix bugs and rebuild the DR Fortran code at this time, so updating the Size Limits is a good workaround; this would be needed even if code were added to prevent the crash. Keywords: None References: None
Problem Statement: Can users add header bias to derived properties in ABML/UBML correlations?
Solution: Currently, MBO does not support this. If the property is a derived, complex, or interaction-type property, a header bias cannot be added to it. Keywords: None References: None
Problem Statement: When using "Get Calculated Values", we run one-minute averages for a week for a number of tags. After this, a calculation is made in a new column where it checks if the tag value exceeds a setpoint. This column is strapped as a table and then we filter out this table for only rows that exceed the setpoint. Excel initially filters out the values correctly, but then the Add-in recalculates the spreadsheet with the filtered-out rows, but with new starting timestamps (which changes the data). For example, the report finds about 20 instances or so on 5/8/20xx where the setpoint is exceeded. This is marked correctly in the Excel column. When we filter the column for only values that are marked, the correct rows are originally filtered out, but then the Add-in recalculates these rows at 5/9 midnight and gives the data from that point, overwriting the data we want. This knowledge base article explains the reason for it and shows how to resolve it.
Solution: The problem is caused by the Microsoft Excel filter action, which triggers the Add-in formula to recalculate the data. The Aspen Process Data Add-in is not able to correctly handle MS Excel's filtering action on the recalculation of the spreadsheet, and Microsoft is not expected to address this issue. The workaround is to create data links of the Add-in output on a separate worksheet and apply the filter on the data links; MS Excel filtering will then not trigger the Add-in formula to recalculate. To create data links, select the Add-in output range and choose Copy, then go to a different worksheet, select the destination range, right-click, and choose Paste Special. On the Paste Special dialog, click the Paste Link button. Keywords: References: None
Problem Statement: Aspen Fidelis Reliability Visual Studio Dependency
Solution: If a customer anticipates the need to write custom user code in VSTA (Visual Studio Tools for Applications), they will need to purchase a commercial license from MSDN, which will require each customer to request an individual quote. For a training class, the Community edition of Visual Studio may be used. Keywords: Visual Studio, VSTA, Write Key Routines References: None
Problem Statement: An example of an override controller in Aspen HYSYS Dynamics.
Solution: This example shows override control of a distillation column reboiler, implemented through a minimum selector between the Tray 20 temperature controller OP and the sump level controller OP. In this case the sump level controller "LIC-101" is operating its valve at around 40-50% opening, and the steam control valve at 25%. When the level in the sump falls, the controller "LIC-101" will start to close "VLV-108". However, there might be cases where the level keeps dropping to dangerous levels, for example because the pressure is falling. To prevent this, a minimum selector taking the minimum of the Tray 20 temperature controller OP and the LIC OP (taken directly from the valve) is used. In this case, if the sump level controller OP is very low, the selector will cut back the steam flow to the reboiler. Keywords: Override controller, selector controller References: None
Problem Statement: Is there a way to generate a table to report internal energy as a function of temperature and pressure?
Solution: Internal energy can be reported using Aspen Properties either for a pure component (U) or for a mixture (UMX). The file attached illustrates how to create a generic analysis to tabulate the internal energy of ethane with respect to pressure and temperature changes. This example can be adapted to your needs. Here are the steps to reproduce the example file:
1. Create a blank Aspen Properties file
2. Add the components of interest
3. Select the appropriate property package based on your component list
4. From the Navigation Pane, create a new Analysis from the Analysis folder
5. Select the GENERIC type
6. Define your system composition
7. Select the variables you would like to vary/fix (e.g. Temperature and Pressure)
8. Add the Range/List for each adjusted variable from item 7
9. In the Tabulate tab, create a Property Set containing the variable Internal Energy
10. Run the simulation
The results for the analysis can be seen in the Results tab; you can also plot the results using any of the options available in the ribbon. Note: if the analysis results are empty, you might want to check whether the input specifications are valid (e.g. valid phases, range of the adjusted variables, property package validity, etc.) The example file attached to this article was created using Aspen Properties V9. Keywords: Aspen Properties; Internal Energy, HYSYS; Aspen Plus; References: None
Problem Statement: In Aspen Plus, is it possible to create a user-defined variable which is a combination of a variety of specified and/or calculated variables available in the simulation? This variable would either be reported or further used as input to other blocks or design specs and sensitivity analyses. How can the user create a user-defined variable in Aspen Plus using a calculator block?
Solution: When working with a calculator block, the user can define a new variable of type parameter and attribute values or equations to this parameter. For example, let's say the user wants to create a variable called myvari, representing a stream mass flow multiplied by a constant. The user variable should first be declared as a parameter (figure 1) and then set to the user's need, using either Fortran language or Excel (figure 2): FIGURE 1 Note: If the user wants the variable to be visible to another block, the information flow should be marked as "Export variable". Parameter variables can be used in Design Specifications or Sensitivity Analyses. They cannot be accessed in a block directly, but a block variable can be equated to a parameter. Local parameter variables cannot be used outside the Calculator block. The units of the user variable may be set during the parameter definition (figure 1). MASS-FLOW (or other) units may be specified. There is also the option to use a DUMMY unit or dimensionless if the user variable requires it. FIGURE 2 Once the simulation is run, the user variable will be calculated and reported in the "Results" folder of the calculator block. The file for this example (created in V10) may be downloaded from the article attachments.
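As a hedged illustration of the Fortran step in figure 2, assuming the stream mass flow has been imported into the Calculator block as a variable named MFLOW and myvari declared as an exported parameter (both names and the constant 2.0 are hypothetical):
C     Calculator block executable Fortran (fixed form, statements from column 7)
C     MFLOW  = imported stream mass-flow variable (assumed name)
C     MYVARI = exported parameter variable (assumed name)
      MYVARI = 2.0 * MFLOW
Keywords: Calculator Block; user-defined variable; fortran; References: None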
Problem Statement: When creating an mMDM Hierarchy using Hierarchy Creation Wizard, after configuring First Level Settings, the following error is shown: Could not find HierarchyLevelDefinition id=XXXXXX@XXXX-XX-XXTXX while evaluating RootLevel on item
Solution: In the Aspen mMDM Editor, verify that the records under Hierarchy Levels are not grayed out. If the records are grayed out, this means that the Active Date is set in the past. Go to View > Active Date, select the option Now, and then click OK to commit the change. After this, the records under Hierarchy Levels will no longer appear grayed out and you should be able to finish configuring your Hierarchy. Keywords: mMDM Hierarchy Hierarchy Levels References: None
Problem Statement: Why is the Mach Number always zero at the flare tip outlet in Aspen Flare System Analyzer (AFSA)?
Solution: The Mach Number calculation at the outlet of the flare tip is related to the flare tip temperature calculation. AFSA calculates the outlet flare tip temperature by first calculating the enthalpy at the stagnation point of the fluid exiting the flare system. For this calculation, the program assumes that the velocity of the gases after exiting the pipe system into the atmosphere is zero, hence reporting this velocity value at the end of the tip; once it has found the stagnation enthalpy value, it calculates the temperature based on a PH flash. The Mach number formula is M = v / c, where v is the fluid velocity and c is the local speed of sound. Because of the zero-velocity assumption, the Mach number at the outlet is always zero. Keywords: Mach Number, Zero, Flare Tip, Outlet, Fluid Velocity References: None