Problem Statement: What Remote Desktop services are supported by Aspen Fleet Optimizer?
Solution: Remote Desktop Services (RDS) can be used with Windows Server 2008 R2, Windows Server 2012 R2 and Windows Server 2016 to allow multiple remote client workstations to access and use Windows server desktops and applications. Remote Desktop Services and the users connecting through RDS must be on separate computers. If Fleet Optimizer is installed on a client workstation, RDS features will not be exposed. If Fleet Optimizer is installed on an RDS server with a valid license file, RDS features are exposed to all users connecting to that server. Keywords: None References: None
Problem Statement: What setting controls the ability to re-export orders in Aspen Fleet Optimizer?
Solution: The setting is called allowReExport and it is stored in the customize.ini file under Winopt. Use this option when the scheduling circumstances have changed and the existing schedule needs to be overwritten with a new schedule based upon the latest circumstances. Assume 10 trucks were on duty when a schedule was exported. Then, during a later shift, three trucks are taken off duty (after the schedule was exported). You may want to take the trucks off duty and then re-optimize with the seven trucks now on duty. Re-exporting the schedule allows you to overwrite the existing plan. Keywords: None References: None
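A rough sketch of how the allowReExport setting described above could look in customize.ini, assuming Winopt is an INI section header and that a value of 1 enables the option (both assumptions, not confirmed by this article):

[Winopt]
allowReExport=1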
Problem Statement: How is Aspen Fleet Optimizer long haul logic turned on?
Solution: If you turn on long haul logic, a dispatcher is able to pre-define a shift type with an ending time of up to 500 hours, which is approximately 20 days. A dispatcher can turn a truck shift on or off either manually or through the truck schedule service, but it is your responsibility to make sure that a long-distance truck still on the road is not turned on for the next shift's optimization. This option determines if the Long Haul feature is enabled when optimizing: =0 (Long Haul Disabled), =1 (Long Haul Enabled). Keywords: None References: None
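As a sketch only, since the article gives the option values but not the INI key name or file, the key name below is hypothetical:

; hypothetical key name; the valid values (0/1) are the ones given in the article
LongHaulEnabled=1   ; 1 = Long Haul Enabled, 0 = Long Haul Disabled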
Problem Statement: What are the factors that make up an Aspen Fleet Optimizer optimization?
Solution: Optimization options refer to the Fleet Optimizer processes that are used to make your system as effective as possible. These options use specific algorithms to calculate the most effective solutions. Ultimately, optimization supports your installation in reducing costs and increasing profit. Fleet Optimizer optimization features include:
Best Buy & Toll Optimization – For net buyers and purchasers of product.
Cloud Setup – Groups clusters in a cloud to provide broader split shipment flexibility.
Cluster Setup – Groups customers that are geographically close together in clusters to support shipment flexibility. Up to six customers can be assigned to each cluster.
Group Setup – Groups terminals by marketing region for shipment flexibility.
Load Sizing – Manages load size to achieve preferred conditions.
Proportionality – Maintains shipments that are proportionate to average sales by product.
Terminal Setup – Defines terminals for delivery assignments.
Transport Setup – Defines transports for delivery assignments, determining schedules, and determining transportation costs.
Truck Matching (recursive and iterative) – Enhances Resource Scheduling Optimization (RSO) by allowing two methods of optimization (recursive and iterative) based on the number of product and compartment combinations on a given order.
Weight-By-Axle Optimization – Configures transports using weight parameters for each axle on a transport (instead of by transport compartment fill lines).
Weight Optimization – Uses various factors (such as specific gravity, density, transport weight, and so forth) to optimize the amount of product that can be loaded onto a transport.
Keywords: None References: None
Problem Statement: What type of truck matching does Aspen Fleet Optimizer use?
Solution: By default, Fleet Optimizer uses a recursive method when adding orders to a transport. This method assigns loads to transports sequentially until the maximum number of loads to sequence for a given delivery (INI setting MaxLoadsToSequence) limit is reached, at which point the optimizer switches to a faster method – iterative transport matching. Iterative Truck Matching is an optional feature designed to enhance optimization performance by limiting the number of possible product/compartment combinations that the RSO must calculate. Consider using Iterative Truck Matching if all of the following conditions apply: You are dispatching more than 100 transports per shift. Some of your transports contain more than six compartments. RSO performance is slow. Iterative transport matching is used when the system encounters a case where both the number of compartments on a transport (INI setting NumCompart) and products in an order (INI setting NumProducts) are met or exceeded. Keywords: None References: None
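A minimal sketch of the three INI settings named above, with purely illustrative values (these are not recommended defaults):

MaxLoadsToSequence=100   ; maximum loads to sequence for a delivery before switching to iterative matching
NumCompart=7             ; iterative matching applies when the transport's compartment count meets or exceeds this
NumProducts=4            ; ...and the order's product count also meets or exceeds this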
Problem Statement: Is there any option to set up the air cooler in such a way that the outside air flow rate is adjusted to match the required outlet process temperature?
Solution: It is not possible to make such a specification in the Exchanger Design and Rating (EDR) program. However, when we integrate HYSYS with the EDR program, we can use the Adjust tool in HYSYS. In the attached example file (V10), the air cooler in the flowsheet is using the EDR model. The Adjust operator used in the flowsheet has been created to manipulate the outside air flowrate to achieve a target temperature for the outlet process stream. When we run the model and the Adjust operator converges, we can view the final value for the air flowrate in the air cooler Performance tab and also in the Monitor tab of the Adjust operator. KeyWords: Air Cooled Exchanger, Adjust, outside air flowrate Keywords: None References: None
Problem Statement: What is the difference between a guaranteed truck and a bill by shipment truck in Aspen Fleet Optimizer?
Solution: If Guarantee is selected for a transport, Fleet Optimizer assumes that the labor cost for that transport equals the total number of hours available for that transport multiplied by the cost per hour or distance. Since the transport is marked as Guarantee, the labor cost always takes into account the total hours available. If there is a pre- or post-trip time, Fleet Optimizer incorporates these into the total available hours. If Bill by Shipment is selected, a transport is paid on a load-by-load basis and not guaranteed a certain number of hours. There is no additional cost for not fully utilizing a transport that is marked Bill by Shipment. A transport marked Bill by Shipment can use any cost structure method for the labor, overtime, or operating cost type. Historically, the most common cost type for a Bill by Shipment transport has been a Cost per Volume (CPV) chart defined as the operating cost of a transport. Keywords: None References: None
Problem Statement: How is fan power calculated in the Air Cooled Exchanger program?
Solution: The ideal fan shaft power is computed from the calculated static pressure loss for the air-cooled heat exchanger and the known volumetric airflow through the fan. The fan driver power is then evaluated at the summer/winter design temperature so that the driver is adequately sized. The fan static efficiency is used in selecting the proper blower for any given installation. The static efficiency neglects the velocity pressure imparted to the air and considers only the volumetric flow delivered against the static pressure: Static efficiency = Static air HP output / Brake HP input. In Exchanger Summary/Fan, the 65% is a default value provided by the program for the static efficiency, which can be changed by the user.
The summer fan motor power is calculated from:
Ws = (the fan pressure rise) * (actual volumetric flowrate through fan) / (the combined efficiency of the fan and drive system)
The fan power for winter operation is calculated from:
Ww = Ts / Tw * Ws
where Ts and Tw are the summer and winter exchanger X-side absolute temperatures, respectively, at the inlet for forced draught fans. The fan power for winter operation is not calculated for induced draught fans, as the fans move air at the bundle outlet temperature. The fan driver power is evaluated at the winter design temperature so that the driver is adequately sized.
The fan driver rated power for forced draught units is given by an expression (not reproduced here; see HTFS report DR61) in terms of:
PDR = the fan driver rated power (kW)
PSA = the actual fan shaft power (kW)
ET = the efficiency of the drive (direct, V-belt or gear)
TS = the summer design air temp. (K)
TW = the winter design air temp. (K)
The fan driver rated power for induced draught units is given by a corresponding expression in terms of:
PDR = the fan driver rated power (kW)
PSA = the actual fan shaft power (kW)
ET = the efficiency of the drive (direct, V-belt or gear)
TS* = the mixed mean air temperature at the finned tube bundle outlet for summer design conditions (K)
TW* = the mixed mean air temperature at the finned tube bundle outlet for winter design conditions (K)
For more information, please refer to the HTFS report DR61. Keywords: Air cooled exchanger, Fan power calculation References: None
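As a quick worked example of the two relations above, using arbitrary illustrative numbers: with a fan pressure rise of 150 Pa, an actual volumetric flowrate of 100 m3/s, and a combined fan/drive efficiency of 0.65, Ws = 150 * 100 / 0.65 ≈ 23.1 kW. If the summer and winter inlet absolute temperatures are Ts = 308 K and Tw = 263 K, then Ww = (308 / 263) * 23.1 ≈ 27.0 kW for a forced draught fan.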
Problem Statement: This Knowledge Base article provides steps to resolve the following error: There was an error opening the connection to source db server. Error: -2147217843 - Login failed for user 'xyz'. .. which may be encountered when setting up a connection from the Aspen Extractor to the source extraction database such as DeltaV Batch Historian hosted in Microsoft SQL Server.
Solution: Both the User ID and Password are case sensitive and need to be entered with the same case in which they are stored in the MS SQL Server database under Security | Logins. Keywords: Error: -2147217843 2147217843 References: None
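If it helps to confirm exactly how the login name is stored, the standard SQL Server catalog view sys.server_principals can be queried in SQL Server Management Studio and the name compared, character by character, with what is typed into the Aspen Extractor connection:

SELECT name, type_desc FROM sys.server_principals WHERE type IN ('S', 'U');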
Problem Statement: Aspen Search may produce unexpected results when tag name contains (.) periods, such as ‘.FIC502.’ and ‘.FIC502F.’ . Examples: 1. When typing .FIC502. in the Add to plot/list box, the Search will find tags containing both .FIC502. and .FIC502F. 2. When typing *.FIC502.* in the Add to plot/list box, the Search will only find tags containing .FIC502. 3. When typing .FIC502F. in the Add to plot/list box, the Search will only find tags containing .FIC502F.
Solution: This is as designed. Search finds items if the search string starts with the typed text, contains the typed text, or is equal to the typed text. A score is defined based on the match and the search results are sorted by score with the highest scores on top. That is why, for example, the tags containing .FIC502F. are shown after the tags containing .FIC502. in the search results when .FIC502. is entered. Keywords: None References: None
Problem Statement: How do service levels impact Aspen Fleet Optimizer optimizations?
Solution: Service Levels are used to assign priorities to customers, which determines the preference that customers receive during automatic optimization. Resource Scheduling Optimizer (RSO) ensures that, after optimization, orders assigned to transports have higher customer service levels than those in overflow that can be switched one for one with an order already on a transport. For example, the RSO could remove an order assigned to a transport and replace it with an order from the Overflow Shipments list if: The orders are compatible and can be switched on the transport in a one-for-one exchange. The customer with an overflow order has a higher priority than a customer whose order is already on a transport. Service Levels impact the optimization of split loads as well as single customer deliveries. When creating split loads, Resource Scheduling Optimization (RSO) prioritizes pending orders by runout point, then by Service Level, to calculate an average prioritization level before assigning any of the orders to a split load. For each split-load delivery, the RSO calculates an average prioritization level for must-go orders that make up the delivery. Therefore, a customer with a Service Level of “25” and a runout date of “June 3” will have a higher priority than a customer with a Service Level of “75” and a runout date of “June 4”. Keywords: None References: None
Problem Statement: How to connect AOL to IP.21 using Cim-IO
Solution: A Cim-IO for IP.21 interface needs to be established before AOL can retrieve data from the IP.21 server. The interface is created on the Cim-IO server, which is usually installed on the IP.21 server or on a separate server.
1. Go to the Cim-IO server, open the Cim-IO Interface Manager, and add a new Cim-IO interface.
2. Once the interface is established, new services will be added automatically.
3. Create a new logical device for this interface by adding the device to the C:\Program Files (x86)\AspenTech\CIM-IO\etc\cimio_logical_devices.def file. The format should be: ‘xxxxx(any new logical device name) xxxx(IP.21 server name) CIMIOSETCIM_200’ (see the sketch after this entry).
4. Copy the def file and paste it to the same file location on the AOL workstation.
5. Open the Services file in C:\Windows\System32\drivers\etc on the Cim-IO server, copy the port entries related to the new Cim-IO services, and paste them into the same file on the AOL desktop.
6. Open AOL, go to Control Panel – Tags – Specification form, add a new tag, input your tag name, and choose tag type ‘DCS’.
7. Click ‘Validate’ and select the correct CIM-IO device.
8. The description of your tag will be loaded automatically from the IP.21 side. Now we are able to add tags from IP.21, which means the connection between IP.21 and AOL has been established.
Keywords: Cim-IO for IP.21 AOL connection References: None
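For illustration, if the new logical device were named IOMYIP21 and the IP.21 server were named MYIP21SERVER (both names are examples only; the service name is the one given above), the added line in cimio_logical_devices.def would read:

IOMYIP21   MYIP21SERVER   CIMIOSETCIM_200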
Problem Statement: How to customize Model Summary in Aspen HYSYS?
Solution: The Model Summary in Aspen HYSYS provides a summary of stream results and unit operation data in tabular format. It is similar to the Workbook but has much more advanced features, is customizable, and can be exported to Excel. The Model Summary is available under the Home ribbon in Aspen HYSYS. The user can customize the Model Summary Grid simply by copying properties and conditions from a stream and pasting them onto the Summary Grid. Copying data from one stream will automatically populate the information for all other streams in the simulation. The copy and paste option is also applicable for each unit operation available in the flowsheet. The Model Summary can be exported to Excel either as a standalone summary or keeping a live link via the Aspen Simulation Workbook (ASW). To export to Excel, click on "Send to Excel/ASW" and follow the on-screen instructions. If you keep the link with ASW, then inputs can be changed in Excel and the results will be updated in the Excel spreadsheet. Keywords: Model Summary, ASW, Workbook References: None
Problem Statement: When Aspen Process Data COM Excel Add-in (PD COM Add-in) is installed on a client computer via a web download from the MES Web Server and the data source is located in a different domain from the client computer's domain, the PD COM Add-in may be extremely slow or not responding when it tries to access the data source. This is because in this configuration it uses WCF Framework to send and receive data which requires proper user authentication across different domains. To make the PD COM Add-in work in this configuration, one must implement some non-standard settings in the Microsoft IIS Manager. Note: Please use the following solution to help resolve the Excel COM addin "Error: Invalid URI: The hostname could not be parsed"
Solution: Here is what needs to be done to make the PD COM Add-in work in this configuration: • Open IIS Manager on the MES Web Server and select Application Pools in the left pane • Right click on AspenProcessDataAppPool in the right pane, select Advanced Settings from the Context menu and change the Managed Pipeline Mode to Classic • Recycle the application pool by right-clicking on the pool and choosing 'Recycle' from the context menu • Now, back in the left pane, expand Sites | Default Web Site | Web21 and select Process Data • Double-click the Authentication object in the right pane and change the Status for 'ASP.NET Impersonation' to 'Enabled' • You can also disable Basic Authentication on the same page • Start MS Excel and test various Process Data Add-in functions KeyWords Addin slow "Error: Invalid URI: The hostname could not be parsed" Keywords: None References: None
Problem Statement: Error “Invalid CASE cell CASE 1000 in table case….” when running PIMS.
Solution: This is a limitation of PIMS. The maximum case number PIMS can use is 999, so case number 1000 is invalid. CASE numbers from 1 to 999, written as strings of up to 3 digits, can be used in PIMS, so CASE 01 and CASE 001 are also valid. Keywords: CASE Error References: None
Problem Statement: What values are used by the Aspen Fleet Optimizer Sales Import service?
Solution: The data type is determined by the value of the SIUPD_MODE field in the TCIF_SIIMP table: 0 = Sales Only 1 = Inventory Only 2 = Sales and Inventory When both Sales and Inventory are entered, they may or may not refer to the same period of time. This is determined by the value of the inventory date (INV_DATE) and of the sales date (SALES_DATE). If the inventory date is not equal to the sales date + 24 hours, inventory and sales data are considered separate; they do not cover the same period. To report the sales amount for a period of time (starting at SALES_DATE) and the inventory amount (at the end of the same Sales period) in the same record, the sales date must be exactly 24 hours less than the inventory date. Keywords: None References: None
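As a simplified sketch (the TCIF_SIIMP table contains more columns than the three shown, and the date literal format here is illustrative), a record carrying both sales and inventory for the same period could look like:

INSERT INTO TCIF_SIIMP (SIUPD_MODE, SALES_DATE, INV_DATE, ...)
VALUES (2, '2023-06-01 06:00', '2023-06-02 06:00', ...);

Here SIUPD_MODE = 2 (Sales and Inventory) and INV_DATE is exactly SALES_DATE + 24 hours, so the two amounts are treated as covering the same period.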
Problem Statement: What is Aspen Fleet Optimizer Order Manager Web?
Solution: The Web-based Aspen Fleet Optimizer Order Manager – Web (Order Manager – Web) solution allows petroleum organizations to leverage real-time, mission-critical information across the supply chain. Order Manager – Web is the ultimate petroleum supply chain Customer Relationship Management (CRM) tool, uniting a company with its customers, suppliers, vendors, and shippers to minimize operating costs and maximize profits. Implemented in combination with Aspen Fleet Optimizer, Aspen Fleet Optimizer Order Manager – Web offers unparalleled CRM convenience, cost savings and operational efficiency through secure, web-based information access and data exchange. Keywords: None References: None
Problem Statement: When installing Tomcat, it is possible to change the default port number Tomcat usually installs on to some other port. This may cause Aspen Search not to work correctly. This Knowledge Base article provides a list of files to check when the Tomcat port number was changed from default 8080 to some other port. This is to make sure the files have been updated accordingly.
Solution: Open the following files in a text editor, search for 8080 and replace it with the new port number (8081, for example):
C:\inetpub\wwwroot\AspenTech\ProcessData\AtProcessDataREST.config
C:\inetpub\wwwroot\AspenTech\ProcessExplorer\WebControls\AtWebPlotsConfig.xml
C:\inetpub\wwwroot\AspenTech\aspenONE\App_Data\config.xml
C:\Program Files (x86)\Common Files\AspenTech Shared\TomcatX.X.XX\conf\server.xml
C:\Program Files (x86)\Common Files\AspenTech Shared\TomcatX.X.XX\appdata\scheduler\config\jobs\NPEScanData.xml
C:\Program Files (x86)\Common Files\AspenTech Shared\TomcatX.X.XX\appdata\scheduler\config\jobs\NPEScanFiles.xml
C:\Program Files (x86)\Common Files\AspenTech Shared\TomcatX.X.XX\appdata\scheduler\config\jobs\ScanAPRMDataSources.xml
C:\Program Files (x86)\Common Files\AspenTech Shared\TomcatX.X.XX\conf\context.xml
Note: The context.xml file may have had the old Tomcat port number added to it manually - it would not exist here by default but would need updating if it exists, as follows:
1. Stop the Tomcat service and add this entry into the context.xml file in the <Context> xml node: <Parameter override="false" value="http://localhost:8081/solr" name="solr.url"/>
2. Start the Tomcat service. This entry will override the default and allow you to use port 8081 instead of 8080 in the Scheduler application. The context file should then look something like the sketch shown after this entry.
Also check the following files:
C:\inetpub\wwwroot\AspenTech\Web21\WebControls\AtWebPlotsConfig.xml (this is for search within IP.21 Process Browser)
C:\inetpub\wwwroot\AspenTech\Web21\WebControls\AtDetailSearch.asp (this is for search within IP.21 Process Browser)
C:\inetpub\wwwroot\AspenTech\DispatchService\Metadata\web.config (this is for search within pre-v9.0 IP.21 Process Browser)
Keywords: References: None
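A minimal sketch of the resulting context.xml (any attributes on the <Context> element itself, and any other child entries, are omitted here):

<Context>
    <Parameter override="false" value="http://localhost:8081/solr" name="solr.url"/>
</Context>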
Problem Statement: Which licenses are required to run HYSYS Dynamics Run-Time through ASW?
Solution: To leverage HYSYS Dynamics Run-Time through ASW, the following licenses are required: Aspen HYSYS Dynamics Run-Time: 8 Tokens; Aspen Simulation Workbook: 2 Tokens. Keywords: None References: None
Problem Statement: When installing Windows operating systems, the inetpub directory installs into C:\ by default. Some customers have moved this IIS directory to a volume other than root C:\. Once the IIS directory is moved to a different volume, say F:\, this is not recognized by the Aspen InfoPlus.21 installer; after installation, the AspenTech InfoPlus.21 files that reside under inetpub recreate the inetpub directory and install into C:\inetpub regardless.
Solution: Moving IIS to a non-system drive is not supported by Microsoft as of Windows Server 2012 R2 and newer. This is because IIS is now a core Windows component and cannot be installed on a non-system drive. For more information please refer to Aspen KB Article # 141446: https://esupport.aspentech.com/S_Article?key=141446 To summarize, AspenTech no longer supports any installations where the inetpub directory has been moved to a non-system drive. Keywords: inetpub IIS wwwroot References: None
Problem Statement: What are the advantages of Aspen Fleet Optimizer Fleet Operations?
Solution: Fleet Operations gives terminal dispatch personnel the ability to manage transport delivery activities and schedules. Fleet Operations and the core Aspen Fleet Optimizer applications run from a single production database. This allows for direct access to critical data, eliminates time lags and data errors, and makes for easier information sharing within the supply chain. It gives terminal personnel the ability to view all must-go shipments for a given terminal, date, and shift, displays the shipments assigned to individual transports, and shows all must-go hours for a given terminal, date, and shift. Keywords: None References: None
Problem Statement: This Knowledge Base article provides steps to install a new security certificate for Aspen InfoPlus.21 server called 'AspenTech InfoPlus.21 IP21 OPCUA Server' when the certificate is corrupted or missing.
Solution: 1. Launch Aspen InfoPlus.21 Manager (IP.21 Manager).
2. Double click the TSK_OPCUA_SVR task in the Defined Tasks list.
3. Determine the location of the executable, which is specified in the Executable textbox. Example: C:\Program Files\AspenTech\InfoPlus.21\db21\code\IP21OpcUAServerHost.exe
4. Stop the TSK_OPCUA_SVR task in IP.21 Manager.
5. Launch the Windows OS command prompt using Administrator rights (Run as Administrator).
6. Change directory to the location specified in step 3. Example: cd C:\Program Files\AspenTech\InfoPlus.21\db21\code
7. Uninstall the existing certificate using the command below: IP21OpcUAServerHost /uninstall
8. To verify the existing certificate is removed, launch the 'UA Configuration Tool' and select the 'Manage Certificates' tab.
9. Select Store Type: Directory
10. Select Store Path: C:\ProgramData\AspenTech\InfoPlus.21\CertificateStores\MachineDefault
11. Click the 'View Certificates...' button. The 'Manage Certificates in Certificate Store' dialog should not display the 'AspenTech InfoPlus.21 IP21 OPCUA Server' certificate.
12. Close the 'Manage Certificates in Certificate Store' dialog opened in the step above.
13. In the 'Manage Certificates' tab, select Store Type: Windows
14. Select Store Path: LocalMachine\UA Applications
15. Click the 'View Certificates...' button. The 'Manage Certificates in Certificate Store' dialog should not display the 'AspenTech InfoPlus.21 IP21 OPCUA Server' certificate.
16. Close the 'Manage Certificates in Certificate Store' dialog opened in the step above.
Note: Ensure there are no other copies of the 'AspenTech InfoPlus.21 IP21 OPCUA Server' certificate at different Store Paths. If there are, you will have to remove all those certificate copies.
17. Switch to the command prompt opened in step 5 and install the new certificate using the command below: IP21OpcUAServerHost /install
18. Close the 'UA Configuration Tool'.
19. To verify the new certificate is installed, launch the 'UA Configuration Tool' and select the 'Manage Certificates' tab.
20. Select Store Type: Directory
21. Select Store Path: C:\ProgramData\AspenTech\InfoPlus.21\CertificateStores\MachineDefault
22. Click the 'View Certificates...' button. The 'Manage Certificates in Certificate Store' dialog should display the 'AspenTech InfoPlus.21 IP21 OPCUA Server' certificate.
23. Double click the 'AspenTech InfoPlus.21 IP21 OPCUA Server' certificate.
24. Verify the certificate contains the new machine name (MES in this example).
25. Close the 'View Certificate' and the 'Manage Certificates in Certificate Store' dialogs opened in the steps above.
26. In the 'Manage Certificates' tab of the 'UA Configuration Tool', select Store Type: Windows
27. Select Store Path: LocalMachine\UA Applications
28. Click the 'View Certificates...' button. The 'Manage Certificates in Certificate Store' dialog should display the 'AspenTech InfoPlus.21 IP21 OPCUA Server' certificate.
29. Double click the 'AspenTech InfoPlus.21 IP21 OPCUA Server' certificate.
30. Verify the certificate contains the new machine name.
31. Launch IP.21 Manager, select the TSK_OPCUA_SVR task and start it.
32. Start AspenOPCUA Explorer to verify that the Aspen IP.21 server appears in the trusted applications list.
Note: The config file for the IP21OPCUA Server, tsk_opcua_server.opcua.config.xml, usually located in the ‘C:\ProgramData\AspenTech\InfoPlus.21\db21\group200\’ directory, contains some information associated with the registration of the UA Server, as seen in this example: <Thumbprint>46FC987F053374F20CBAA78ADB6B2516CFF59400</Thumbprint>. 
This file is updated every time the server is registered. This file is required for the registration of the OPCUA Server. If the config file is missing then you may copy the attached initialized (blank) version of the file and then register it on the server, as described above, which should update it appropriately. I recommend these steps: 1. Stop the TSK_OPCUA_SVR in IP21 Manager 2. From an elevated Windows OS command prompt, run: IP21OpcUAServerHost.exe /uninstall 3. Copy the UA config XML file into the group200 folder under ProgramData. 4. From an elevated command prompt, run: IP21OpcUAServerHost.exe /install 5. Start the task in the IP21 Manager. Keywords: References: None
Problem Statement: How does external data get into Aspen Fleet Optimizer?
Solution: Aspen Fleet Optimizer relies on a number of NT services to pull external data from interface tables into the core Aspen Fleet Optimizer tables. NT Services are utilities that process information behind the scenes of the Fleet Optimizer application. NT Services are usually installed on a separate server and run as distinct functions. The NT Services read, write, and process data from, or to (in the case of the Notification Service), the TCIF or common interface tables. Keywords: None References: None
Problem Statement: What are the advantages of Aspen Fleet Optimizer Fuels Management?
Solution: Fuels Management provides selective Fleet Optimizer functions in a web-based environment. It allows customer order entry and management, including the ability to modify, delete, view, and print orders; the ability to confirm delivery of a scheduled order by product or compartment; and the option to review sales and inventory data for inventory-managed customer accounts. This provides convenient information access and visibility for customers, terminals, drivers, and dispatchers; allows for error reduction through direct data input; enables easier information-sharing and visibility between supply chain participants; and encourages shared accountability across the supply chain. Keywords: None References: None
Problem Statement: The 'Set Start/End Time' tab of the Timeline Settings dialog box in aspenONE Process Explorer allows users to set the trend plot Timespan, the Start and End time of the plot, and the Timespan Sections, which contain two components: Number of time periods and Orientation (Past and Future). This KB Article explains the purpose and functionality of the “Future” Timespan Sections feature.
Solution: The "Future" Timespan Sections feature can be used with Standard trend only. It allows for the plot to be shifted by a predefined period during a trend update. The plot will shift in increments determined by the time period, based on an evenly divisible boundary with respect to the timespan. There are two selections to make: • Number of time periods can be selected from 0 (default) to 1, 2, 4, and 8 • Orientation for the time periods can be either Past (default) or Future For example, selecting 4 sections and Future time orientation, with the timespan in the plot set to 1 day (24h) with current time of, say, 11:41 am, the plot would start at 6:00 am for the current day (evenly divisible boundary – 24h/4=6h) and end at 6:00 am the next day. At noon, the plot would advance and then start at noon of the current day and end at noon the following day. If the orientation was Past, instead of Future, the plot would start at noon the previous day and end at noon on the current day. Keywords: Past Future References: None
Problem Statement: This knowledge base article describes which user name and password should be used when configuring the Process Data for PHD sub-component in the Aspen Data Source Architecture (ADSA) configuration.
Solution: Honeywell PHD is a data historian which is based on an Oracle relational database. Aspen Process Explorer can plot data from the Honeywell PHD historian if the Aspen Process Data (PHD) sub-component is added to the ADSA configuration. The Aspen Process Data (PHD) sub-component is added to the list of available sub-components using the ADSA Client Config Tool. The user name and password specified in the Aspen Process Data (PHD) sub-component configuration dialog box are not the user name and password for the Oracle database used by the Honeywell PHD historian. Rather, the required user name and password are for a Windows account which is used to access the Honeywell PHD server. Important note: To use the Aspen Process Data (PHD) service to connect to your PHD database, you must have the PHD client installed on your ADSA server computer before you install AspenTech products. Additionally, some AspenTech product functionality is not available with 3rd party databases; for example, tag browsing is not available for PHD systems. Keywords: account service logon login References: None
Problem Statement: What is the Constant Volumetric efficiency loss parameter (L) input for a reciprocating compressor?
Solution: The volumetric efficiency, VE, is defined as the actual pumping capacity of a cylinder compared to the piston displacement volume. The equation used for volumetric efficiency is:
VE[%] = (100 - L) - r - C*( Z*( r^( 1/k)) - 1)
L = constant volumetric efficiency loss (%); accounts for the effects of variables such as internal leakage, gas friction, pressure drop through valves, and inlet gas preheating
C = clearance volume (%)
Z = compressibility factor ratio (Z suction / Z discharge)
r = compression ratio (P discharge / P suction)
k = heat capacity ratio (Cp/Cv)
As per the Aspen HYSYS Operations Guide (available via the Documentation link), an arbitrary value of about 4% VE loss is acceptable to account for losses at the suction and discharge valves. For a non-lubricated compressor, an additional 5% loss is required to account for slippage of gas. If the compressor is in propane, or similar heavy gas service, an additional 4% should be subtracted from the volumetric efficiency. These deductions for non-lubricated and propane performance are both approximate and, if both apply, cumulative. Thus, typical values of L vary from 0.04 to 0.15 (or more) in general. Keywords: Reciprocating, Compressor, Constant, Volumetric, Efficiency, Loss References: None
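As a worked example with illustrative numbers only: for a non-lubricated compressor in propane service, the deductions above give L = 4 + 5 + 4 = 13%. Assuming C = 10%, Z = 1.0, r = 3.0 and k = 1.13 (an assumed heat capacity ratio, not a design value), the formula gives:

VE[%] = (100 - 13) - 3.0 - 10*(1.0*(3.0^(1/1.13)) - 1) = 87 - 3.0 - 10*(2.64 - 1) ≈ 67.6%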
Problem Statement: After I select the type of impingement plate option, it shows in red. What does the red color mean?
Solution: The error appears when the shell side nozzle is specified with 'No Impingement' plate, but an impingement device (shroud) is specified on the Impingement tab. To resolve it, go to Input | Exchanger Geometry | Nozzles | Shell side nozzles, specify 'Yes impingement' on the inlet for the impingement device, and then run the case. It will work without a problem. Keywords: None References: None
Problem Statement: BCU searches the IP.21 history to locate instances where Trigger Conditions specified in unit scripts are true. These conditions are defined by the user in the BCU Administrator, BCU Config box, Conditions tab. Each condition has the following format: <Operand1><Operator><Operand2> where an Operand can be a Tag Name, an Alias, or a Constant. The drop down menu for the Operators reveals a standard list of logical operators as well as an operator called "Change". What does the operator called "Change" represent?
Solution: The operator called "Change" represents any change of value for the tag listed as Operand1. When "Change" is selected, no Operand2 is necessary. NOTE: If any triggers use the "change" operator, then the Trigger type must be set to "State". Transition and Duration are not valid choices. KeyWords: Keywords: None References: None
Problem Statement: This solution provides an Aspen SQLplus query to return all the references to a manually-entered tag name in the Aspen InfoPlus.21 database.
Solution: procedure findchar24 (char24 char(24), qname record)
  local i, j, k int;
  local c char(80);
  j = (select #QUERY_LINES from QueryDef where name = qname);
  for i = 1 to j do
    c = (select query_line[i] from QueryDef where name = qname);
    c = upper(c);
    k = POSITION(upper(char24) IN c);
    If k > 0 then
      write char24 || ' was found in line ' || i || ' of ' || qname;
    end
  end
end

local filename char(256), s string, r record, found_field field, ref_type char(20), dummy char(20), tname char(24);
macro sqlfileslocation = 'C:\ProgramData\AspenTech\InfoPlus.21\db21\group200\sql\';
s = PROMPT('Enter tag name');
macro searchstring = s;
r = s;

Write '&searchstring' || ' appears in the following CompQueryDef records:', ' ';
for (select name as QueryName from compquerydef
     where queryname || '.sql' in (select line from system 'dir/b &sqlfileslocation')) do
  filename = '&sqlfileslocation' || queryname || '.sql';
  select QueryName, linenum from file(filename)
    where position(upper('&searchstring') in upper(line)) <> 0;
  set column_headers = 0;
end

Write ' ', '&searchstring' || ' appears in the following ProcedureDef records:', ' ';
for (select name as QueryName from proceduredef
     where queryname || '.sql' in (select line from system 'dir/b &sqlfileslocation')) do
  filename = '&sqlfileslocation' || queryname || '.sql';
  select QueryName, linenum from file(filename)
    where position(upper('&searchstring') in upper(line)) <> 0;
  set column_headers = 0;
end

Write ' ', '&searchstring' || ' appears in the following QueryDef records:', ' ';
FOR (Select name "nym" from QueryDef) DO
  tname = nym;
  findchar24 (s, tname);
END

Write ' ', '&searchstring' || ' appears in the following non-Query references: ', ' ';
found_field = nxtrefer(NULL, r);
while found_field is not null do
  ref_type = (select Definition from All_Records where Name = substring(1 OF found_field));
  write found_field, ref_type;
  found_field = nxtrefer(found_field, r);
end
Keywords: SQLplus Show references References: None
Problem Statement: Aspen Calc has a built-in ability to do unit conversions if they are configured correctly. This solution documents all steps needed for a custom calculation to automatically reconcile any unit differences when new values come into the database.
Solution: First make sure that the units necessary are configured in the Repeat Area of ENG-UNITS. Use the Aspen InfoPlus.21 administrator to add the engineering units to ENG-UNITS. Open Aspen Calc and go to File | New | Unit Conversion to add new unit conversions. Add any unit conversions that do not already exist that are applicable to the calculation. Take note of what the standard calculations are that are already configured. For example, note how there are .3048 meters in a foot (by observing data from the distance conversion type). Select New under the Calculation: toolbar and enter a formula name. On the next screen, select CalcScript from the New Formula tab and proceed to the script editor. Type the formula in script editor. Be sure to include all variables that will be used in the calculation. Because this is a CalcScript, Aspen Calc will automatically define all variables used. The variables are defined on the next screen. They can be bound to tags in Aspen InfoPlus.21 using either the Tag Browser or by double-clicking the row containing the tag's attributes. Once these tags are bound, double click the tag row to open the variable properties. The tag will be already configured with units, but these will have to be changed into units that are defined in Aspen Calc. Now switch to the General tab and select the unit that should be used in the calculation. Aspen Calc will automatically draw current values, convert them, do the calculation, and convert the output value as needed. Change the Engineering Units on all tags as needed. The Calculation Wizard should now reflect the units that will be used in the calculation. These units should make sense in the context of the calculation used. Remove the Calculation from "TestMode" and press Execute. Notice how all values have been converted from the values that show up in InfoPlus.21 and the units in the calculation work as expected. The value of the output tag, after being converted into its original units, should also be reflected in the most recent trend value. This would indicate that the calculation worked correctly. This calculation can be scheduled or run as needed. Keywords: Engineering Units Aspen Calc References: None
Problem Statement: Aspen Mtell View loads for the first time, but only displays the barebone architecture in the browser, with no icons or page styling.
Solution: This issue occurs if the “Static Content” HTTP feature is not enabled for IIS within Server Manager. To resolve this issue, open Server Manager, select the option for “Add roles and features”, step through the wizard until the “Server Roles” page, and locate the option for “Web Server (IIS)”. Expand Web Server, then Common HTTP Features, and make sure the option for “Static Content” is checked. Once this is done, finish stepping through the wizard and reset IIS. Reload the page in Aspen Mtell View and the icons and page structure should now show up. Keywords: Aspen Mtell View blank page References: None
Problem Statement: This Knowledge Base article explains the purpose of the Alarm Type column on the aspenONE Process Explorer Alerts page.
Solution: The Alarm Type column on the Alerts page in aspenONE Process Explorer (a1PE) indicates what the alarm is based on. In other words, if it’s set to Value, it simply means that a particular alarm state is based on the value of a tag as defined in the database (such as High High, High, Low, Low Low). The Alarm Type column can have other values. For example, for IP_AnalogDef tags, the field can be configured to have the following values: Rate of Change, High Severity, Middle Severity, Low Severity. It may be different for different types of records. Using the same example of tags defined by the IP_AnalogDef definition record, you can change the Alarm Type by going to IP_AnalogMap Map_#AlarmInfo (repeat area) and clicking on the Value under "MAP_AlarmType". This is not a user-configurable field; it needs to be set by the system Administrator, and it affects all tags in the definition record. Keywords: References: None
Problem Statement: Live agents in Aspen Mtell stop processing and the Event History page in the Aspen Mtell System Manager shows Warning messages from the Agent Service reading CBM Thread: Invalid sensor reference ID ## in machine learning profile for Live Agent “XXX” is not from a historian. Cannot Process Profile…
Solution: This message indicates that the system cannot find tag number ## from the historian that is needed to run the live agent. To find the tag referenced in the error message, open SQL Server Management Studio, open the MtellSuite database, and search for all values in the table [CBM].[Tag]. The historian tag associated with the ID number mentioned in the error is the tag that is not updating or not discoverable from the historian. Keywords: CBM Thread Live Agents not processing References: None
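A simple way to list the table contents is shown below (the query only selects everything; the column holding the sensor reference ID is not named in the warning, so identify it by inspection):

USE MtellSuite;
SELECT * FROM [CBM].[Tag];

Locate the row whose ID matches the ## value in the warning and check that tag against the historian.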
Problem Statement: The AlertUserDef definition record is used by the Alerts feature in the aspenONE Process Explorer (a1PE). This Knowledge Base article provides a description of the fields in the AlertUserDef definition record.
Solution: IP_EMAIL_ADDRESS is a 50-byte fixed-length character string field. The a1PE application updates this field with the user’s email address.
IP_EMAIL_DAYS is a 7-bit integer field formatted by UB7. Each bit represents a day of the week. TSK_ALRT will send email alerts to the user on days where the corresponding bit is set.
· Bit 1 set – Monday
· Bit 2 set – Tuesday
· Bit 3 set – Wednesday
· Bit 4 set – Thursday
· Bit 5 set – Friday
· Bit 6 set – Saturday
· Bit 0 set – Sunday
IP_EMAIL_START_TIME and IP_EMAIL_END_TIME are 20-bit integer fields formatted by DM7. Each field is intended to hold an offset (in 0.1 seconds) since the start of the day. The a1PE application updates these two fields with the start time and end time of a time range during which the user can be sent email alerts.
IP_EMAIL_TIMEZONE is a 128-byte fixed-length character string field. Used for user time zone information so that times can be shown in the user-defined time zone in email.
#DATA_BASE_FIELDS is a repeat count field. It is formatted by I4. This field is normally set to some reasonable initial value (e.g. 20) when the alert user record is initially created by a1PE. The a1PE application may occasionally increase or decrease the repeat count by some reasonable amount as needed. Any change to this field will cause the alert user record to become activated.
RECORD_&_FIELD_NAME is a field pointer field. The a1PE application may update a blank occurrence of this field with an integer field of interest to the user. Or a1PE may clear this field when the user unsubscribes to the field. Any change to this field will cause the alert user record to become activated.
IP_SPC_ALARM is a one-bit, unsigned integer field formatted by NO/YES. The a1PE application sets this field to YES if the referenced field is an SPC alarm field.
IP_ALERT_CRITERIA is a 32-bit integer field formatted by UB32. The a1PE application sets the contents of this field. Each bit indicates if a corresponding alarm state should be considered to be an alert.
IP_EMAIL_REQUESTED is a one-bit, unsigned integer field formatted by NO/YES. The a1PE application sets this field to indicate if an email should be sent for the subscribed field.
IP_ACKNOWLEDGEMENT is a one-bit, unsigned integer field formatted by ACK/UNACK. The external task, TSK_ALRT, may set this field to the UNACK state. The a1PE application may reset this field to the ACK state when the user acknowledges the alert.
IP_ALERT is a one-bit, unsigned integer field formatted by NO/YES. The external task, TSK_ALRT, will set this field to YES if the subscribed field meets the alert criteria; otherwise, the field is set to NO.
IP_ALERT_TIME is an extended timestamp field formatted by TS20. Last alert time. It is reset to undefined when an alert is acknowledged.
IP_ALERT_MAP is a 32-bit signed integer field formatted by UB32. Last alert information, i.e. type of alert (High, Low etc.). Example - 10000000000000000000000000000000 represents LOWLOW.
IP_ALARM_STATE ERROR? is a 1-bit unsigned integer field formatted by NO/YES. This field gets updated when the alerts task has some issue/error while processing a particular record. Keywords: References: None
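As a worked example of IP_EMAIL_DAYS: to receive email alerts Monday through Friday only, bits 1 through 5 are set, i.e. 2^1 + 2^2 + 2^3 + 2^4 + 2^5 = 62. Setting all seven bits (adding bit 6 for Saturday and bit 0 for Sunday) gives 127, i.e. alerts every day of the week.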
Problem Statement: This solution discusses how to change all the records in a definition record to be displayed in either only capital letters or only lowercase letters.
Solution: To update the case of the tag names in the Aspen InfoPlus.21 database, open the Aspen SQLplus query writer and use the commands “UPPER” or “LOWER” to update the tags in a particular definition record. Keywords: Uppercase Lowercase References: None
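A minimal Aspen SQLplus sketch, assuming the goal is to uppercase the names of every record defined by IP_AnalogDef (substitute the definition record of interest, and test against a non-production database first, since this renames records):

UPDATE IP_AnalogDef SET name = UPPER(name);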
Problem Statement: When trying to connect to Aspen InfoPlus.21’s OPC DA server, an OPC client may fail to connect even though the Aspen InfoPlus.21 database is running and the user running the OPC client has read permissions to the database. Connecting to the database using Cim-IO for OPC-DA may result in the messages “CIMIO/OPC DOWN” and “CIMIO/OPC REOPEN & READY” repeatedly occurring in the CIMIO_MSG.LOG file.
Solution: The error message indicates that one of the entries in the tags branch is defined without having an associated definition record. To fix this problem, open the Aspen InfoPlus.21 Administrator and search for a record named “IP_TagsBranch”. Check the contents of the repeat area of this record and locate each occurrence where the “PE_BRANCH” field is not populated. You can either delete each occurrence where this happens or populate the “PE_BRANCH” field with a valid definition record. Once these changes are made, restart your OPC client. Keywords: References: None
Problem Statement: I am getting the following error: "iq.exe exited with error code = 1 error" when trying to run TSK_ACTG_SYNC. What does it mean and how can I overcome this error?
Solution: The above error normally appears when a tag is configured without a roll-up source. The following query should help you identify the accounting records that are configured without a source record. Once every ActgRecord has a SourceRec, the problem with starting TSK_ACTG_SYNC should not occur.
SELECT Name as "ActgRecord",
       cast("TREND_TIME_FIELD" as record) as "SourceRec",
       "SourceRec"->Definition as "SourceDefRec",
       trim(' 1' from trim(trim(leading cast("TREND_TIME_FIELD" as record) from "TREND_TIME_FIELD"))) as "HistTimeFld",
       "SEQUENCE_NUMBER" as "LastProcessedHSN",
       "Last_Rollup_Time" as "LastRollupTime"
FROM "AccountingDef"
where name not like 'D-ACTG%' and "Active Sw." = 'ON'
KeyWords Keywords: None References: None
Problem Statement: How do I turn Debug on for an Aspen Accounting.21 record?
Solution: This can be accomplished as follows:
1. Execute the following Aspen SQLplus query to determine the record id for each of the Aspen Accounting.21 records defined by AccountingDef:
select name, recid from AccountingDef;
which will return results similar to these:
name RECID
------------------------ -------
D-ACTG 2971
D-ACTGDet 2972
atcaihr 3120
atcaishft 3121
OR right-click on the name of the Aspen Accounting.21 record in the Aspen InfoPlus.21 Administrator and select "Properties".
2. Locate record TSK_ACTG, which is defined by ACTGTaskDef, in the Aspen InfoPlus.21 Administrator.
3. Expand the repeat area called #INTEGER_VALUES.
4. Set 'Record ID to debug' to the desired record id returned from the query above.
5. Set 'Debug Flag' to 1.
6. Stop and start TSK_ACTG via the Aspen InfoPlus.21 Manager.
The debug log will be generated in TSK_ACTG.OUT.
After the Aspen Accounting.21 record has been processed you will need to:
1. Locate record TSK_ACTG, which is defined by ACTGTaskDef, in the Aspen InfoPlus.21 Administrator.
2. Expand the repeat area called #INTEGER_VALUES.
3. Set 'Record ID to debug' to 0.
4. Set 'Debug Flag' to 0.
5. Restart TSK_ACTG.
Keywords: debug AccountingDef TSK_ACTG References: None
Problem Statement: This KB article shows how to take, for example, a Monthly Accounting record, force a roll-up, and then change the Roll-up Period.
Solution: First, to force a Roll-up, you need to go into the record and change the field 'ANY_DEMANDS?' to 'ROLL-UP NOW'. Next, change the field 'ACTIVE SW.' to 'OFF'. Now, change the field 'ACTG_ROLLUP_PERIOD' to the new desired rollup period. Finally, reset the field 'ACTIVE SW.' to 'ON'. When you look in the '#ACTG_TREND_ROLLUPS' History repeat area you will see 3 new entries: One at the time the Rollup was forced. One at the time the Active SW was turned Off. One at the time the Active SW was turned On. From now on, Roll-ups will be calculated at the period defined in Actg_Rollup_Period KeyWords Keywords: None References: None
Problem Statement: The field, STORE_&_FORWARD_MODE, was added to records defined by AccountingDef in version 6.0 with the advent of the Store and Forward (S&F) feature of Aspen Accounting.21. This field has caused a lot of confusion with users. This solution details the field's use, and explains why it should not be manually adjusted.
Solution: The Aspen Accounting.21 User's Manual describes the S&F functionality. The field, STORE_&_FORWARD_MODE, however, is not explicitly explained. The field is updated by TSK_ACTG and TSK_SNF. It is NOT to be manually adjusted. When TSK_ACTG detects a forwarded value for an accounting record, it sets the field to "ON". Then TSK_ACTG activates one or more of the TSK_SNF tasks to recalculate whatever rollups need to be recalculated. When the TSK_SNF task(s) are finished recalculating the necessary records, it/they will set the STORE_&_FORWARD_MODE field to "OFF". At that point, TSK_ACTG again becomes responsible for the normal updates and rollups of the accounting records. Since the STORE_&_FORWARD_MODE field is updated by either TSK_ACTG or TSK_SNF, do NOT manually change it. KeyWords Keywords: None References: None
Problem Statement: The Accounting.21 User's Manual explains how to create Accounting Associations and AccountingDef records one association at a time using GCS. Knowledge Base article 108516 describes how to create these same records using the InfoPlus.21 Administrator. This article explains how to use the DOS utility, ACTG_LOAD, to create AccountingDef records in mass. This article assumes a basic understanding of Accounting.21 functionality **.
Solution: The ACTG_LOAD utility reads a space delimited text file containing "source point" tag names. It then creates the accounting associations and the AccountingDef records. The format of the file is:
SourcePointName GroupName Divisor
Here is an example file:
atcf101 Group2 0
atcf102 Group2 0
atcl101 Group2 0
atcl102 Group2 0
Save this file with a .txt extension. In this case, it is saved as c:\load.txt. To run the utility, go to a DOS prompt and to the actg\code directory (C:\Program Files\Aspentech\InfoPlus.21\db21\actg\code) and type:
actg_load
Enter input file: c:\load.txt
(After inputting the file name, the utility runs...)
Opening file c:\load.txt...
Create Accounting For atcf101...
Create Accounting For atcf102...
Create Accounting For atcl101...
Create Accounting For atcl102...
When the DOS prompt returns again, you should have an association for each source point and the appropriate number of AccountingDef records for each association.
**See Solution 108516 (https://esupport.aspentech.com/S_Article?key=108516) for an overview of all of the records involved with Accounting.21 including a description of groups and associations. See Solution 108649 for a description of Divisors.
KeyWords: ACTG_LOAD Keywords: None References: None
Problem Statement: The function of the CM_CLEAR_ALERT field in records defined by CMLimitDef is not documented.
Solution: The CM_CLEAR_ALERT field is not used. No reference to this field exists in the code, and a review of CMLimitDef indicates that no activations are associated with this field. KeyWords Keywords: None References: None
Problem Statement: The aspenONE Process Explorer Admin page allows a user to automate a full scan of all configured data sources once per day at a scheduled time. This solution discusses how a second daily scan can be configured.
Solution: Navigate to http://<server_name>:8080/AspenCoreSearch to access the Scheduler Configuration page. You may need to log in to the scheduler page (default credentials are username: admin and password: admin). In the Scheduler Configuration page, there are a number of preconfigured jobs that maintain the aspenONE Process Explorer configuration. The job used for scheduling a daily scan is “NPEScanData”. To force a second daily scan, change the schedule frequency from “Every 1 day” to “Every 12 hours”. You may also create a new job and duplicate the information in the NPEScanData job. Keywords: Daily File Scan References: None
Problem Statement: My DCS server clock is behind the Aspen InfoPlus.21 server clock by about 3 minutes and many AccountingDef records get stuck in "waiting for Store and Forward" status. Is there any way to correct this without having to synchronize the clocks on both servers?
Solution: To resolve the issue, try changing the 8th integer parameter in TSK_ACTG, the one named ACTG Rollup Closing Delay, to a value greater than the time difference between the servers, such as 5 minutes (300 seconds). Keywords: None References: None
Problem Statement: AspenTech applications require that an Aspen Local Security (ALS) or Aspen Framework (AFW) server is running and available via the network. Applications will fail to launch if the security server is not available. This KB article shows how to configure a system running AspenTech applications to connect to an ALS or AFW server.
Solution: Follow these steps to configure a system running AspenTech applications to connect to an ALS or AFW server: 1. Verify that AFW or ALS security server is running on a system that is accessible on the network 2. Run AFW Tools (click the Start icon on the Windows Task Bar, type AFW Tools and click on the app that appears in Windows Search results). 3. Select Client Registry Entries tab and enter the node name of the security server in the URL field. Example: "http://PlantSecServer/Aspentech/AFW/Security/pfwauthz.aspx" if security server is "PlantSecServer". 4. Delete the four cache files typically located in the C:\ProgramData\AspenTech\AFW folder. KeyWords Init Cache Com Error Keywords: None References: None
Problem Statement: When using the CreateAssoc record to create AccountingDef records, the AccountingDef records physically get created, but the TREND_VALUE_FIELD and TREND_TIME_FIELD are not getting populated. There is an error in the created accounting records' LAST ACTION field: Invalid TREND_VALUE_FIELD field.
Solution: The AccountingPeriods record can only have 20 occurrences in order for the CreateAssoc to work properly. Get rid of any occurrences past number 20. Be careful, however! AccountingPeriods is a type of Selector record. Deleting occurrences may affect other records. KeyWords CreateAssoc Keywords: None References: None
Problem Statement: It is possible for the Compliance Monitor reports to display incorrect data for the case in which old (no longer valid) open excursions exist. Examples of the incorrect data which can exist in the reports are: The calculated percent compliance is incorrect The Total Time Out is greater than the reporting period This knowledge base article provides a procedure which allows the next cycle of Compliance Monitor reports to be calculated with correct data.
Solution: 1. The first step is to manually close all open excursions in the system. The following query will close all open excursions. An open excursion is an excursion which has a defined start time, but no end time.
Update CMLimitDef Set Hist_Time_Out_Alert = 'DD-MMM-YY HH:MM:SS' Where Hist_Time_Out_Alert is Null;
In the above query, 'DD-MMM-YY HH:MM:SS' is a timestamp. This will normally be the current timestamp.
2. Toggle Enable_Monitoring off/on at the CMLimit record level or the CMGroup record level. Note: The Disp_Disable_Reason field must be populated when monitoring is disabled.
Update CMLimitDef Set Disp_Disable_Reason = 'Restart Monitoring';
Update CMLimitDef Set Enable_Monitor = 'No';
Update CMLimitDef Set Enable_Monitor = 'Yes';
3. Restart TSK_CMON and TSK_CMRP.
After all excursions are closed the compliance monitor reports will contain no data until new excursions are opened. In order for the compliance monitor to register an excursion, there needs to be a bad, then a good, then another bad value (bad = limit violation) in the data record (the record that feeds the CMLimit record). This means that the compliance monitor may not register the very first time a limit is exceeded. Keywords: Empty Blank Wrong Data Incorrect Bad Bogus References: None
Problem Statement: What do the different options UPDATE NOW, ROLLUP NOW, and INITIALIZE in the field "ANY_DEMANDS?" do in records defined by AccountingDef?
Solution: The "ANY_DEMANDS?" field is used internally by TSK_ACTG when it is time for the period update or rollup. It can also be used "manually" in the Aspen InfoPlus.21 Administrator or via Aspen SQLplus when TSK_ACTG is for some reason "behind" (i.e. It is not processing the current source data). The 3 options for the field include: UPDATE NOW, ROLLUP NOW and INITIALIZE. An UPDATE NOW does "ALL" statistical calculations starting from the RESET_PERIOD_BASE timestamp to current time ("UPDATE NOW" time). For example, if that same hourly AccountingDef record is one day or 24 hours "behind", the UPDATE NOW option will perform 24 calculations on all 24 missed hourly calculations and will write an occurrence in history for each missed calculation (i.e. 24 occurrences will be written). The ROLLUP NOW option does 1 statistical calculation (for average, min, max, std deviation, etc) from the RESET_PERIOD_BASE timestamp in the fixed area of the AccountingDef record up to current time ("ROLLUP NOW" time) and writes that calculated value to a new occurrence of history. For example, if an hourly AccountingDef record is one day or 24 hours "behind", the ROLLUP NOW option will perform 1 calculation for the entire 24 hour period and write that occurrence to history. An INITIALIZE causes the values for averages and totals to NOT be rolled up and written to history. Instead, the values are set to zero, undefined, or to the most recent history value. Keywords: None References: None
Problem Statement: After an ungraceful shutdown there are times when ACTG.EXE uses 100% CPU after the startup, and it appears that TSK_ACTG is "stuck" processing one record. To find out if TSK_ACTG is indeed "stuck", using the IP.21 Administrator, right click on TSK_ACTG. Choose Properties and then the External Task tab. Is the "CURRENTLY PROCESSING" changing or not? If TSK_ACTG is using high CPU and the "CURRENTLY PROCESSING" queue is not changing, use this article to clear out the TSK_ACTG activation queue.
Solution: Stop TSK_ACTG from the IP.21 Manager. Open a command prompt and enter the following commands, pressing Enter after each one:
cd %setcimcode%
set SETCIM_PROCESS_NAME=TSK_ACTG
plantap
If the command prompt does not return within a couple of minutes, kill the command prompt window. Run the TSK_ACTG_SYNC task from the IP.21 Manager. When TSK_ACTG_SYNC is finished, start TSK_ACTG. Check the External Task tab for TSK_ACTG. Are the records changing or still "stuck" on one record? If changing, let the system run and catch up. If still stuck, call Support. KeyWords: CPU TSK_ACTG activation Keywords: None References: None
Problem Statement: The Aspen Accounting.21 User's Manual explains how to create AccountingDef records using the "sun-set" (no longer distributed) tool named GCS. GCS, which is listed in the manual as a requirement, is not actually necessary, though it does simplify the record creation process. Accounting.21 relies on specific GCS displays distributed with the product to configure it. This document outlines how to configure the AccountingDef records (and the necessary supporting records) without using GCS. Instead the InfoPlus.21 Administrator is used. This document is not meant to replace the User's Manual, but rather supplement it.
Solution: To configure Accounting.21 data records, it is necessary to first create and configure multiple supporting records including: group, priority, list, and period records, association defaults, and association records. In this document, the following example will be used: The raw data comes from an IP_AnalogDef record ("ATCAI") which will be the source to an hourly AccountingDef record ("ATCAI.HR"). The hourly will be the source to a daily AccountingDef record ("ATCAI.DAY"). The daily will be the source to a monthly AccountingDef record ("ATCAI.MON"). The necessary supporting records will be created and then the actual Accounting records will be created. Attached to this article is a Word document containing screen shots of all of the records described. Supporting Records Accounting.21 data records must belong to "groups". This feature allows a user to turn Accounting.21 "on" or "off" for an entire group or set of records. To configure groups, there is a record called "ACCOUNTING GROUPS" defined by ActgUpDownDef. This is a selector type record in which the number of groups that you want to have is determined by opening the repeat area count field, #_OF_SELECTIONS. In each selection, provide a name for the group in the SELECT_DESCRIPTION field and a status of "UP" ("ON") or "DOWN" ("OFF") in the GROUP_STATUS field. Priority records specify the activation order of AccountingDef records at the end of an update period. Records at a high priority are processed before records activated at a low priority. If at a point in time, an hourly and a daily update are scheduled to occur, you would want the hourly to occur before the daily. The priority on the hourly update should be higher than the priority on the daily. "ACTG_PRIORITIES", defined against ActgPriorityDef, is a selector type record that automatically has 3 priorities: low, medium, and high. Each has an integer value associated with it, noted in the PRIORITY field. The higher the integer value, the higher the priority. It is recommended that you have a PRIORITY for each period in your cascaded Accounting periods. Update periods and rollup periods are configured in the record, "ACCOUNTING PERIODS". Determine what periods are needed. At what interval do you need a value written to history? Hourly, Daily, etc.? This/these will be the rollup period(s). At what point do you want a current "to the moment" calculation? This will be the update period. All periods, whether update or rollup, are configured in the "ACCOUNTING PERIODS" record. But, before configuring "ACCOUNTING PERIODS", it is necessary to create rollup list records. Rollup list records, defined against Actg_ListDef, are used internally by TSK_ACTG to know what AccountingDef records to activate for a particular update interval. There must be one list record for each update and rollup period. To create rollup list records, there is a master record called "CreateList", defined against CreateListDef. In the "RECORD_NAME" field, type in the name you will use for your list record, such as "Hourly List" or "Daily List". Then, in the ACTG_CREATE field, type "YES". You should then see your new list record under ACTG_ListDef. Repeat for all other list records needed. Now, to configure update and rollup periods, go to the "ACCOUNTING PERIODS" record. This is a selector type record where there is one occurrence for each update and/or rollup period. For each occurrence, enter: A period name in the SELECT_DESCRIPTION field. A reschedule interval for the period. (i.e. 
01:00:00.0 for an hourly interval or 24:00:00.0 for a daily interval.) If you have a period based upon a "special rollup" or non-periodic interval such as a calendar month, enter 00:00 and specify a calendar record in the CALENDAR RECORD field.** The ACTIVATE_TIME, which specifies the next time accounting records having the associated update period are to be processed. The rollup list record for that period. The ACTG_PRIORITY. (Remember an hourly interval needs to have a higher priority than a daily interval.) A "Yes" or "No" for ACTG_APPEND. "Yes" specifies that the "Description" will be appended to the accounting data record name. A DESCRIPTION or characters to add to the source point name. For example, add ".hr" to the source point record's name for an hourly accounting record. The number of trend values in memory the accounting records will have. "On" or "Off" for the IP_ARCHIVING switch. The name of the history repository that the accounting records will go to. Create Association Defaults An association is a group of Accounting records, typically with different periods, that ultimately reference the same source record. With a properly configured association, all of the cascaded accounting records (i.e. hourly, daily, and monthly) are created at the same time. To create the default association, go to the record "CreateAssoc" defined against CreateAssocDef. First, it is necessary to configure the repeat area, #ACTG_RECORDS. This is where the cascading takes place. By default Accounting.21 configures higher rollup periods to point to the next lower rollup period for input. That is, the monthly records look to the daily records for input. The daily records look to the hourly records for input. In the ACTG_ROLLUP_PERIOD fields, enter the periods from the "ACCOUNTING PERIODS" record that you will need. For the ACTG_SOURCE_TYPE, enter, for each period, whether the source data will be "RAW" data or "ACTG" calculated data. For the ACTG_SOURCE_PERIOD, enter the period that will be used as the source. For the UPDATE_PERIOD, enter the period that will be used. In the fixed area of the CreateAssoc record: Choose a DESCRIPTION to be used as the extension to the source record name for this association. In this example, .assoc is used. Choose a GROUP from "Accounting Groups" in which the accounting records will be placed. Choose whether or not you will use TIME BASED. Choose an ACTG_DFLT_COND_REC for a condition record. See Solution 108173 for an understanding about condition records. If you are using folder records for organizational purposes, choose a folder record for the FOLDER_RECORD field. If needed, enter a divisor in the DIVISOR field. See Solution 108649 for an understanding about divisors. Creating Accounting Records Once the default information is entered, the actual Accounting records can finally be created. To do this, enter the name of a source data or "raw" data record in the field RECORD_NAME. In this case, ATCAI is used. In the ACTG_CREATE field, enter YES. You should see a message in the ERROR? field stating ATCAI.ASSOC CREATED. At this point, there are 4 new records created: ATCAI.ASSOC defined by Actg_AssocDef ATCAI.HR defined by AccountingDef ATCAI.DAY defined by AccountingDef ATCAI.MON defined by AccountingDef To activate TSK_ACTG, go to the record "Activate Accounting" defined by ActivateACTGDef and enter a "start" time in the SCHEDULE_TIME field. Accounting should now be running.
**"Calendar" records If you have an interval that is not an exact interval all the time, such as a calendar month, Accounting uses records defined by ACTG_TimesDef. In this example, a calendar record called "Month" has been created. In the repeat area, specify the activation times of each interval. KeyWords: Keywords: None References: None
Problem Statement: The aspenONE Web application makes a request to all data sources configured in the ADSA to get a list of tags subscribing to alerts. If the same data source is configured twice in the ADSA (using different names) the user will see duplicate tags in the Subscriptions page. This Knowledge Base article provides steps to limit the list of tags in the Subscriptions page to a desired data source.
Solution: It is possible to specify Alerts application data source(s) in the AtProcessDataREST.config file. Only these data sources will be considered for Alerts application and other data sources configured in the ADSA will be excluded. Please follow the steps below to specify data sources that will be used by the Alerts application: · Navigate to C:\inetpub\wwwroot\AspenTech\ProcessData directory and open the AtProcessDataREST.config file in Notepad · Scroll down to the bottom of the file and look for the section titled AlertDataSources · Configure valid Alert data sources as shown below. In this example the name of a valid data source is MES. · Perform an IIS reset for changes to take effect
<!-- AlertDataSources: Configure valid Alert data sources.
     Add this to the AlertDataSources section for every alert data source supported:
     <DataSource>DataSourceName</DataSource>
************************************************************ -->
<AlertDataSources>
  <DataSource>MES</DataSource>
</AlertDataSources>
</SearchOptions>
Keywords: References: None
Problem Statement: This Knowledge Base article provides some basic aspenONE Search Service troubleshooting steps to try when the Global Search in aspenONE Process Explorer (A1PE) is not working.
Solution: 1. Make sure aspenONEService.svc exists in the physical IIS Directory (C:\inetpub\wwwroot\AspenTech\aspenONE). Check the Windows Application log in the Event Viewer for an error – the aspenONEService will not run if there is not enough available memory. 2. To increase the initial and maximum memory pool in Tomcat Java settings, please follow KB article 000033607 - https://esupport.aspentech.com/S_Article?id=000033607 . Note: The exact path to the Tomcat8w.exe executable will depend on your version of Tomcat. 3. Make sure Anonymous authentication is being used for the aspenONE IIS Virtual Application (see below) 4. Make sure the Anonymous User can log in to a1PE (or you can try another service account that is working, e.g., one that runs the AspenProcessDataAppPoolx64 App Pool) 5. Make sure ISAPI and CGI Restrictions are enabled. Here is how: On the taskbar, click the Start icon, type Internet Information Services (IIS) Manager and then click the Internet Information Services (IIS) Manager icon. In the Connections pane, click the server name. In the Home pane, double-click ISAPI and CGI Restrictions. In the Actions pane, click Edit ISAPI and CGI Restrictions Settings. Verify that both options are checked. 6. This URL (http://<web_server>/aspenONE/aspenOneSearch.svc) executed from a client browser should return this type of response (This is a Windows Communication Foundation service.) If all of these are ok, there could be a problem with the IIS Handler Mappings, e.g., SVC. Try the attached Web.Config, which should be copied to the C:\inetpub\wwwroot\AspenTech\aspenONE directory. Also, if the following error is seen in the Event Viewer: Could not load type 'System.ServiceModel.Activation.HttpHandler' this means that HTTP Activation has not been enabled. To resolve the error, in Server Manager, select Manage in the top right, then select Add Roles and Features and add the feature called: HTTP Activation. Additional KB articles to review: Global Search and Tag Search in A1PE don't work. Type Ahead from the Tag Input Line works. https://esupport.aspentech.com/S_Article?key=142321 After upgrading, why might the search page not load and get stuck on the waiting "spinner" icon? https://esupport.aspentech.com/S_Article?key=146524 What could cause the search page to not load and the waiting "spinner" icon spin continuously? https://esupport.aspentech.com/S_Article?key=140750 aspenONE Process Explorer Troubleshooting Guide https://esupport.aspentech.com/S_Article?key=000001409 Keywords: References: None
Problem Statement: Aspen KB article # 000033862 (https://esupport.aspentech.com/S_Article?id=000033862) shows how to integrate any object, such as a standard trend plot or an OEE Waterfall diagram, into an Aspen IP.21 Process Browser graphic and then display it in aspenONE Process Explorer. This Knowledge Base article answers the following question: How to open an A1PE trend page outside of a frame object or a graphic?
Solution: As stated in the above-mentioned KB article, an a1PE Trend Plot can be displayed within an iFrame of a graphic with a URL such as this: http://localhost/ProcessExplorer/WebControls/PBPlots.asp?outsidea1=graphic&tag=ATCP301 In order to open an A1PE trend page outside of a frame object, pass that string to a window.open call (for example from a button OnClick action) and specify '_blank' as the window target in the second argument, as follows: window.open('http://localhost/ProcessExplorer/WebControls/PBPlots.asp?outsidea1=graphic&tag=ATCP301', '_blank'); … where the name of the tag to be displayed in the plot is ATCP301. Keywords: References: None
Problem Statement: Users of aspenONE Process Explorer (a1PE) may also want to make use of the Aspen Process Data Excel Add-in but would prefer to avoid installing other MES Desktop products. How can such users install Aspen Process Data Excel Add-in without using the standard installation media?
Solution: When installing the Web Server product the Aspen IP.21 Browser (Web21) site is installed along with the aspenONE Process Explorer site. On the home page of Web21 is a link to the Process Data Excel Add-in download page: http://webservername/Web21/DownloadAddin.asp You can make use of this in a1PE. Create a new graphic in Aspen IP.21 Browser Graphic Studio, drop a Frame Object on it and set the URL parameter to point to that DownloadAddin.asp page. The same can be done for the legacy Excel Add-in: http://webservername/Web21/PDAddin.asp The download page itself could be manually edited to remove external references such as the link to "Back" / "Back to IP.21 Browser". Important note: The user downloading and installing the Add-in must have administrative permissions on the target machine and in most cases needs to run the browser "As Administrator" to be able to see the content of the iFrame. Also, the user may need to manually create the ExcelAddin folder if it is not present on the target system. Keywords: ExcelAddinSetup.exe ExcelAddin.xml MES Excel Addins References: None
Problem Statement: This Knowledge Base article describes how to configure aspenONE Process Explorer (a1PE) to always start with a specific plot, a template or a tag list when first opening the Process Explorer page.
Solution: In order to have aspenONE Process Explorer page always open on a specific file, please follow these steps: · While on the a1PE Home page, right click on the Process Explorer icon and click the Configuration option · Select the ‘Open to specific file:’ radio button and then click the arrow to the right of the empty box below · Use the Search box to enter a file name or navigate to the Public or Private folder to find a file you would like the Process Explorer to open when clicking the Process Explorer icon on the a1PE Home page · Click Save to complete the configuration process Keywords: References: None
Problem Statement: How is the Ramp Steady-State target (SSRDEP) calculated?
Solution: The SSRDEP is computed using the "Gains" from the model and the open loop predicted slope of the ramp. To explain this concept, let's take a simple example of a tank with a level transmitter, flow going in (F_IN) and flow going out (F_OUT). In our example the output flow (F_OUT) is the manipulated variable, the level is the controlled variable and the input flow is a feedforward. The model attached to this example is as follows: Now, for the level to be steady, the mass balance enforces the condition that Fin = Fout. We call this the balance condition; translated to DMC language, this balance condition is: LIslope + ModelSlope*(SSMAN-VIND) = 0 LIslope is the open loop prediction or the predicted slope of the ramp. Since there is no steady-state optimization for ramps, the controller will only try to keep the balance condition at steady state. The controller will always try to keep the balance condition so that the tank does not overfill or run empty. To access the values of the open loop prediction you need to create a print file; this can be done online via Controller / General Details / Diag Print Switch. To create a print file in simulation and review the values in Notepad, go to "Application Details", navigate to the General / Plant area and enable the counter for the Debug Print Counter. The *.PRT file will be saved in the following location: C:\ProgramData\AspenTech\RTE\V10\Clouds\Simulation Attached is the tank.dmc3application file for this example. Import the application and navigate to the simulation view, then let the controller run for a few cycles until the ramp reaches the Ramp Setpoint (50). Then introduce a change on the feedforward value F_IN from 50 to 55 and do a step simulation. Notice that the output flow has a low limit of 15 and a high limit of 85, so it is capable of rejecting this disturbance. The upper and lower imbalance limits show 0, and the Ramp SS Target (SSRDEP) shows -6.519E-09, which for our example is zero. From the above test, the print file shows the following information: So we can calculate the slope as: LIslope = (89.5110-82.0089)/(79-64) = 0.50014 The balance equation is LIslope + ModelSlope*(SSMAN-VIND) = 0. The steady state target for the F_OUT MV should be: SSMAN = -LIslope/ModelSlope + VIND SSMAN = -0.50014/(-0.1)+50 = 55.0014 Notice that the PRT file does not show all the digits and thus the small difference in the simulation; the steady state target shows 55.00. Now, let's see what happens when there is an imbalance in the system. Move the upper limit on F_OUT to 60, then change F_IN to 65; this is a clear imbalance scenario. To calculate the upper and lower imbalance limits, use the Ramp Horizon. In DMC3 Builder the ramp horizon is expressed in minutes; for this controller it is set at 20 minutes. Current Slope: (129.1157-114.0933)/(79-64) = 1.001493 Upper Imbalance: (UDEPTG - DEP)/(RHORIZON) = (80-50)/(20) = 1.5 Lower Imbalance: (LDEPTG - DEP)/(RHORIZON) = (20-50)/(20) = -1.5 The predicted slope is inside the "Safe cone", so there is no need to turn off the controller. In this scenario the SSMAN goes to the high limit on F_OUT, so the imbalance is: (129-114)/(79-64) + (-0.1)*(60-55) = 0.5 Since this is the first time the imbalance shows, the counter goes to 1; if we reach the maximum number of imbalances the controller will be turned off. The attached example uses a maximum number of imbalances of 10 cycles. To make this ramp variable a programmed imbalance, change this value to -1.
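The first part of the calculation above can be summarized in one worked equation (this is only a restatement of the example values, not an additional method):
\[
\text{LIslope} + \text{ModelSlope}\,(\text{SSMAN} - \text{VIND}) = 0
\;\Longrightarrow\;
\text{SSMAN} = \text{VIND} - \frac{\text{LIslope}}{\text{ModelSlope}} = 50 - \frac{0.50014}{-0.1} = 55.0014
\]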
Additional information is available in the KB article below: What is Ramp Imbalance, Programmed Ramp imbalance and Ramp Horizon? https://esupport.aspentech.com/S_Article?id=000015408 Keywords: Ramp SSRDEP Imbalance References: None
Problem Statement: This Knowledge Base article provides an overview for Aspen Compliance.21 application.
Solution: Aspen Compliance.21™ monitors the manufacturing process, recording key information to support compliance guidelines. Aspen Compliance.21 provides documentation of process excursions, defined consequences, and recommended corrective actions while monitoring the manufacturing process. Aspen Compliance.21 is a core element of AspenTech's aspenONE™ Production Management and Execution applications. It is delivered with the Aspen InfoPlus.21 database. Features · Detects and reports deviations of actual values on a scheduled frequency from user-defined constraints and limits, alerting users to non-compliance situations · Continually monitors the manufacturing process · Records non-compliance time periods · Serves the operator with corrective action guidelines in response to excursions and states the consequences of deviation · Accepts entry of excursion reasons and records corrective action · Provides historical reports and includes the capability for scheduled reports on non-compliance of system, unit, line and item points, time-out of compliance, percent noncompliance, and more · Detects communication faults between the process and monitoring system · Allows disabling of compliance monitoring for manufacturing production changeovers and faulty instrumentation readings Related Products · Aspen Compliance.21 is tightly integrated with Aspen InfoPlus.21® Benefits · Provides clear indication when and where deviations from standards occur in the manufacturing process · Enhances the understanding of the manufacturing process by providing users with defined consequences of process deviations and recommended actions for excursions Keywords: References: None
Problem Statement: While Compliance.21 is being implemented, there may be a time when a number of records' values are hovering around their High or Low limit. In this case, it may be desirable to limit the number of reported limit violations by defining a CM_Delay_Time. This solution details the configuration needed to achieve that result.
Solution: CM_Delay_Time is an absolute time computed using one of the CM limit record's start/end delay parameters (CM_HI_START_DELAY, CM_HI_END_DELAY, CM_LOW_START_DELAY, and CM_LOW_END_DELAY fields found in the #_OF_LIMITS repeat area). If the limits defined for the CM record do not specify the delay parameters, the CM_Delay_Time field will always be undefined. The CM_Delay_Time represents the time after which an alert has gone in/out of the limit violation condition. This field is NOT user configurable. The following example shows how to reduce the number of reported limit violations. Let's say that Limit 1 establishes a delay of 20 seconds for the START of the HIGH limit condition (CM_HI_START_DELAY=000:00:20.0) and 10 seconds for the END of the HIGH limit condition (CM_HI_END_DELAY=000:00:10.0). So as soon as the HIGH limit value is violated, an OUT OF LIMIT event will be generated for the Compliance record and 20 seconds later a START EXCURSION event will be posted. Conversely, if the process value returns to normal, an END OF EXCURSION event will be generated and, if the normal value persists for 10 seconds, an IN LIMITS event will then be posted. Based on the above, you should configure the following fields: CM_HI_START_DELAY, CM_HI_END_DELAY, CM_LOW_START_DELAY, and CM_LOW_END_DELAY. The proper use of these fields is described on pages 4-29 through 4-31 in the Compliance.21 User's Manual. This should help reduce the number of reported limit violations. Additionally, you should create and enable an event log record using CMEventLogDef record. This will help keep track of excursions with start/end delays. The proper use of this record is described on page 4-16 in the Compliance.21 User's Manual. Keywords: References: None
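As a hedged sketch, the delay fields from the example above can also be set from Aspen SQLplus rather than the Administrator; the record name CM-MYTAG, the use of the first limit occurrence, and the acceptance of a quoted delta-time string are assumptions for illustration:
-- 20 s delay before a HIGH excursion starts, 10 s before it ends
UPDATE CMLimitDef
   SET cm_hi_start_delay = '000:00:20.0',
       cm_hi_end_delay   = '000:00:10.0'
 WHERE name = 'CM-MYTAG' AND occnum = 1;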
Problem Statement: How to switch from one form of record activation to another (i.e. From Triggered to Scheduled Monitoring or from Scheduled Monitoring to Triggered)
Solution: The attached SQL queries can be safely used by a site that wishes to switch from one record activation method to another in Aspen Compliance.21. These two programs make sure that all CMGroupDefs have at least 1 occurrence in the repeat area #_LIMIT_RECORDS and that the field DOMAIN in record CM-SYSTEM is System and not ..UNASSIGNED.. KeyWords: record activation triggered scheduled monitoring Keywords: None References: None
Problem Statement: What does the error "CKLTHI:vdPutExtremeVal() err -61" mean?
Solution: This message, appearing in TSK_CMON.log, means the subroutine vdPutExtremeVal() encountered an "Invalid Key Time Stamp" error when writing an event to an Aspen InfoPlus.21 history repository. Solution 121668 describes how to correct "Invalid Key Time Stamp" errors. When looking for Aspen InfoPlus.21 API error codes (like -61), try searching for the error code in the header file "C:\Program Files\AspenTech\InfoPlus.21\db21\include\setcim.h." Keywords: References: None
Problem Statement: In order to delete an AccountingDef record, you first need to make it unusable. When doing this, you get an error: Invalid Value in Field TREND_VALUE_FIELD. What does this mean?
Solution: In a record defined by AccountingDef, there are three record pointer fields: TREND_VALUE_FIELD, TREND_TIME_FIELD, and TREND_QUALITY_FIELD. These three fields are populated with the "source data" for the AccountingDef record. For example, for an AccountingDef record called ATCAI.HR, the source data for the three fields might be ATCAI 1 IP_TREND_VALUE, ATCAI 1 IP_TREND_TIME, and ATCAI 1 IP_TREND_QSTATUS. In order to make ATCAI.HR unusable, the references to ATCAI 1 IP_TREND_VALUE, ATCAI 1 IP_TREND_TIME, and ATCAI 1 IP_TREND_QSTATUS need to be removed. To actually remove the references, the "ACTIVE SW." field needs to be set to OFF and then the three fields can be "blank spaced" out. Then the record can be made unusable. Keywords: References: None
Problem Statement: The monthly Aspen Accounting.21 rollups stopped activating.
Solution: 1. Remove past occurrences from the calendar record's #SCHEDULE_TIMES repeat area so there are fewer than 100 (TSK_ACTG only recognizes the first 100 occurrences). 2. Stop TSK_ACTG, run ACTG_SYNC, and restart TSK_ACTG. 3. Set the ANY_DEMANDS field to Update Now to activate the missed rollup. Note: Calendar records defined by ACTG_TimesDef are used to schedule Aspen Accounting.21 record activations at a specific date and time. The activation times are defined in the ACTIVATE_TIME field of the #SCHEDULE_TIMES repeat area. Because TSK_ACTG only recognizes the first 100 occurrences, it is good practice when using Aspen Accounting.21 calendar records to occasionally remove past occurrences. Keywords: References: None
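As a hedged illustration of step 1 above, the following Aspen SQLplus sketch lists the activation times in a calendar record that have already passed; the record name Month comes from an earlier configuration example in this collection and may differ on your system:
-- show calendar occurrences whose activation time has already passed
SELECT occnum, activate_time FROM "Month"
 WHERE activate_time < CURRENT_TIMESTAMP;
-- remove the listed past occurrences in the InfoPlus.21 Administrator by
-- reducing the #SCHEDULE_TIMES repeat area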
Problem Statement: ACTG.EXE is not using any CPU and is not performing any updates or rollups.
Solution: Check the record, "Activate Accounting", defined by ActivateACTGDef. Verify that there is a timestamp in the field SCHEDULE_TIME. If the field contains question marks, "?????", re-enable the field by entering a current timestamp. After entering the current timestamp, this time will get reset to the next activate time of shortest period in the "Accounting Periods" record. When that time occurs, ACTG.EXE should kick in and start processing. KeyWords ACTG.EXE updates rollups Keywords: None References: None
Problem Statement: Is it possible to filter Aspen Accounting.21 calculations? In other words, is it possible to not calculate statistics under certain conditions? For example, you may want to not include the raw data values in the Accounting.21 calculations when a particular valve is turned "OFF".
Solution: There is not a way to do this with Aspen Accounting.21 by itself. However, it is possible with a combination of Aspen Accounting.21, Aspen SQLplus, and InfoPlus.21 COSActDef records. A Change of State detection record, COSActDef, could watch for the state of the valve. When the valve changes from "ON" to "OFF", an SQLplus query can be activated. This query can turn the appropriate AccountingDef record(s) "OFF" (see the sketch below). When the AccountingDef record is turned "OFF", a rollup to that point in time occurs. While "OFF", the AccountingDef record(s) will not accumulate data. Another COSActDef can monitor the opposite status of the valve, and turn the AccountingDef record(s) back "ON". When the AccountingDef record(s) is turned back "ON", a new period with undefined values, "?????", will be created in the history repeat area. The RESET_PERIOD_BASE, from which future calculations begin, will have the timestamp of when the record was turned "ON". KeyWords Keywords: None References: None
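A hedged sketch of the two queries the COSActDef records could activate is shown below; the record name ATCAI.HR is only an example, and the "ACTIVE SW." field name is taken from another article in this collection and is an assumption here, so verify the on/off field for your AccountingDef records before using it:
-- valve has gone OFF: stop accumulating (forces a rollup up to this time)
UPDATE AccountingDef SET "ACTIVE SW." = 'OFF' WHERE name = 'ATCAI.HR';
-- valve has gone back ON: resume accumulation in a new period
UPDATE AccountingDef SET "ACTIVE SW." = 'ON'  WHERE name = 'ATCAI.HR';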
Problem Statement: Error creating AccountingDef records using the CreateAssoc record either through InfoPlus.21 Administrator or Aspen IP.21 Process Browser: Incorrect Buffer Size
Solution: The problem is that the source record name is too long. Once the extension is appended to the source tag name, the new AccountingDef record's name will be more than the allowed 24 characters. For example, if the source record is "ThisIsMySourceRecName" (21 characters) and the extension to be added to the record is ".hour", the resulting AccountingDef record name will be "ThisIsMySourceRecName.hour" (26 characters). KeyWords Keywords: None References: None
Problem Statement: What is the "Divisor" used for in Accounting.21?
Solution: The "Divisor" is used as the factor to make a "total" come out in the correct units. The number used is equal to the number of tenths of a second in the time base of engineering unit. For example: Units/day: 864,000 Units/hour: 36,000 Units/min: 600 Units/sec: 10 If the source value is measured in lbs/hr, but Totals are measured in lbs, the Divisor is used to correct the Total to lbs. In this case, the Divisor is set to 36,000 since there are 36,000 tenths of seconds in 1 hour. If the source units are lbs/min, set the Divisor to 600 to obtain Totals in lbs. KeyWords: divisor Keywords: None References: None
Problem Statement: This Knowledge Base article provides troubleshooting steps to resolve the following scenario: "Some of my Accounting records are rolling up and others aren't." Why are some working and others not?
Solution: One possible reason is that the field, ACTG_QUALITY_CON_REC, in AccountingDef records, is either not populated or it is populated with the wrong condition record. The field, ACTG_QUALITY_CON_REC, is used to hold a condition record, defined against ConditionRecDef. This condition record is used to tell the AccountingDef record if the value from the source data record is of good or bad quality. "Bad" quality values are not used in Accounting calculations. There are 2 "default" condition records in circulation: ACTGQuality-Statuses and ACTG_D-QUALITYCon. ActgQuality-Statuses "throws out" any integer conditions except for 0, -1, and -2. These integer values correspond to the qualities of "initial", "good" or "no status" (from the Quality-Statuses record) for the "raw" source data. ACTG_D-QualityCon "throws out" all quality statuses except 0, which correlates to a quality of "initial". If you are using the condition record, ACTG_D-QualityCon, and you have a quality of -1 ("good") in your "raw" source data record, this value is not included in the Accounting calculations. If you are continually receiving quality statuses of -1, it will appear that your Accounting is not working. In most cases, customers should be using the condition record, ACTGQuality-Statuses. Of course, this condition record can be modified to accommodate individual site needs. If Accounting records are cascaded, once data is in the "lowest" level Accounting record, there is no need for additional condition checking at upper levels of Accounting records. For example, if "raw" data is feeding an hourly Accounting record and that hourly record is feeding a daily record, there is no need to "condition check" in the daily Accounting record. KeyWords: condition record rollup Keywords: None References: None
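As a hedged first check for the cause described above, the following Aspen SQLplus sketch lists which condition record each accounting record is using, so blank or unexpected entries stand out:
-- review the quality condition record configured on every accounting record
SELECT name, actg_quality_con_rec FROM AccountingDef ORDER BY name;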
Problem Statement: An Accounting rollup may need to be recalculated because at the time of the rollup, enough data may not have existed to calculate the correct value. Once the historical data that the Accounting record uses to calculate the rollup exists in the source record, the occurrence can be recalculated.
Solution: In the InfoPlus.21 Administrator, search for the Accounting record you wish to modify. Find the occurrence number in the history repeat area. Go to the ACTG_MODIFIED_SEQ_# field and enter the history sequence number of the occurrence that you wish to recalculate. Then set the ACTG_RECALC_ROLLUP flag to YES. That history occurrence will now be recalculated. KeyWords Keywords: None References: None
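A hedged Aspen SQLplus equivalent of those Administrator steps is sketched below; the record name ATCAI.HR and the sequence number 42 are placeholders for your own record and history occurrence:
-- point at the history occurrence to recalculate, then raise the recalc flag
UPDATE AccountingDef SET "ACTG_MODIFIED_SEQ_#" = 42 WHERE name = 'ATCAI.HR';
UPDATE AccountingDef SET actg_recalc_rollup = 'YES' WHERE name = 'ATCAI.HR';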
Problem Statement: What is the correct tube input for a Twin Box Firebox?
Solution: For a cabin firebox, you should enter the number of tubes against one wall; hence the default Tube Passes defined for a One Cabin Unit is 2. When the Fired Heater is defined as a Twin Box, the Tube Passes default to 4; hence the total number of tubes will be 4 times the number of tubes against one wall. For the roof tubes it is assumed the unit is symmetrical, so you enter the tubes on one side of the gas off take. So, for a twin cabin firebox, the number of roof tubes will be 4 times the number of tubes on one side of the gas off take. Please see below for the graphical representation of the tubes in a twin cabin firebox: So, if we were to specify 6 tubes on the Roof Rows and 10 tubes on the Main Rows, the correct tube count would be (6 + 10) x 4 = 64 tubes. Keywords: Fired, Heater, twin, cabin, firebox References: None
Problem Statement: There may be a need to have existing Aspen Accounting.21 history Rollups recalculated.
Solution: The attached zip file contains 2 Aspen SQLplus queries. Acc21_Recalc_IDS.SQL is run first. It returns the record IDs of the AccountingDef records in the order in which they will be processed by the 2nd query, Acc21_Recalc.SQL. This query performs the actual recalculation of all of the rollups starting with a specified timestamp until the most recent rollup. It does require some manual input. Locate these 3 lines of code in the query and make the appropriate modifications: ReCalcStartDate = '01-Nov-06 22:15:00.0'; WaitTime = 1; -- Seconds START_RECID = 0; -- RECID to start from The ReCalcStartDate needs to contain the starting timestamp for the actual recalculation. The WaitTime is the number of seconds to wait after an occurrence has been recalculated before continuing with the next occurrence or record. It should be a value equal to or greater than 1. The START_RECID is the record ID of the AccountingDef record to start recalculating from. The default value is zero, which indicates that recalculation will be done for all AccountingDef records. If a value greater than zero is specified, only that accounting record and those records that follow it will have their occurrences recalculated. As the query processes the various AccountingDef records, there will be output in the SQLplus Query Writer indicating such. It also indicates when the query is finished processing. Example output: 06-NOV-06 09:39:05.7, [4372], AAA.15min 06-NOV-06 09:39:09.0, [3594], ATCAI.15min 06-NOV-06 09:39:12.4, [3595], ATCAI.hr 06-NOV-06 09:39:13.5, [4373], AAA.hr 06-NOV-06 09:39:14.7, [3596], ATCAI.day 06-NOV-06 09:39:15.8, [4374], AAA.day 06-NOV-06 09:39:16.9, [3597], ATCAI.mon 06-NOV-06 09:39:18.0, [4375], AAA.mon 06-NOV-06 09:39:19.1, Finished This query will only recalculate existing history occurrences. It will NOT calculate and insert new history occurrences, nor will it delete history occurrences. Important note 1: The recalculation query, Acc21_Recalc.SQL, can take a LONG time to run! If there are days, weeks, or months of recalculation required, the query can take hours or days to run. So that the query does not time out in the Aspen SQLplus Query Writer, set the timeout value to indefinite by going to QUERY | TIMEOUT and setting the value to 0. Important note 2: To avoid processing monthly records before hourly records, run both queries with the Reschedule_Interval > 0 filter shown below for the first pass, and then with reschedule_interval = 0 for a second pass. (The queries are designed to operate on more than one historian, so they will need to be tweaked to operate on one historian.) FOR (SELECT ACTG_RECORD as "ActgRecord" FROM ACTG_ListDef WHERE NAME Like ListRec AND "ActgRecord"->Reschedule_Interval > 0 ORDER BY "ActgRecord"->Reschedule_Interval) Keywords: None References: None
Problem Statement: Is there an example showing how to implement a user equation of state?
Solution: See the attached example files for Aspen Plus v10. You can find the description of the arguments in the User Model guide. Note that you must have a supported Fortran compiler (Intel Fortran) to compile and link the user source code into a dynamic link library. The first example, eos-idea.zip, expands on the subroutine template which is provided in the Aspen Plus installation folders. It implements the ideal equation of state. It may be a good example to get started. If you have an existing program with the equation of state, the recommendation is to convert the source code of your program into a subroutine which can be called from this user equation of state. This will allow you to test your equation of state independently of Aspen Plus, and once you are satisfied it is working correctly, to have to worry only about the "plumbing code" needed to wrap your equation of state into the routine that is expected by Aspen Plus. The second example, eos-vdw.zip, implements the van der Waals equation of state. This is not meant to be used for simulation, as it is less accurate than even old classical cubic equations of state such as Peng-Robinson. The example illustrates how to access pure component parameters (such as the critical temperature and pressure). It also illustrates one potential problem, which is the resolution of implicit equations (p = f(T, v)). Here we use Newton's method, with an initial value for the volume chosen to hopefully obtain the liquid or the vapor root as needed. Keywords: ESU, ESU0, user model, fortran References: None
Problem Statement: The Compliance.21 Compliance Report shows all of the excursions and corrective actions taken for Standard Operating Condition (SOC) items that exceed one or more SOC parameters that are flagged for monitoring. In addition, this report calculates the Compliance percentage and the number of hours that an item or items are out of compliance for a pre-configured period of time. The "Aspen Compliance.21 User's Manual" explains the configuration of the report along with how the calculation results are derived. There have been occasions when the Compliance Report shows the total hours out of compliance for a Compliance tag to be greater than the reporting period. For example, a 12 hour report is showing a tag's Total Time out of compliance to be something like 60 hours. How does this happen? Is this a problem? What do you do?
Solution: The problem is caused by "old" open excursions. A CMLimitDef record is configured with limits for each tag where you want to monitor deviations or non-compliance. When one of these limits is violated, an "excursion" is opened in this same CMLimitDef record. When the value for the tag returns to its normal value or back into compliance, the excursion is closed. The Compliance report looks at the CMLimitDef record's history for a pre-configured period of time and calculates how many times the limit(s) was/were violated and the period of time for the violation. When the Compliance report reports a tag out of compliance longer than the report's time range, this means that there are excursions still open from before the reporting period. Below is an excerpt from a Compliance Report. Many of the columns in the report have been cut out as they are not needed for this discussion. In this report, there are 2 CMLimitDef records listed that have a Total Time Out longer than the report's 12 hour period: CM2730PC and CM7010PI. In CM2730PC, there is 1 excursion in history and this was a 6.50 limit value violated by the value of 7.70. In CM7010PI, there are 5 excursions in history where the limit of 20 was violated. The last one was a value of 21.0. In the record, CM5010PC, the Total Time Out is 7:32 indicating that the start of the excursion occurred during the 12 hour reporting period. Compliance Monitoring Report CM-System - SGAUTL DOMAIN PERIOD: 10-AUG-05 19:00 to 11-AUG-05 07:00 Keywords: None References: None
Problem Statement: The Aspen Compliance.21 User's Manual explains how Compliance reports are configured and activated. Using CMReportDef and ScheduledActDef records, you can, for example, have a 24 Hour Compliance Report such as CMDailyReport.txt, every morning. Windows operating systems do not inherently support file versioning. But, there is a way to have multiple versions of Compliance Monitoring reports such as: CMDailyReport.txt.1 on day 1 CMDailyReport.txt.2 on day 2 CMDailyReport.txt.3 on day 3 etc.
Solution: Using the RGD (Report Generation and Distribution) piece of Human Interface, consisting of TSK_DRPT and TSK_SRPT, it is possible to have multiple file versions of Compliance Monitoring reports. This is done by appending a monotonically increasing numerical extension to the traditional report filename. The number of file versions retained is configured on a per report basis. The attached Word document provides DUMPRECs of the records necessary to configure versioning. These records include an example CMReportDef, ReportAddressDef, and TSK_SRPT & TSK_DRPT external records. KeyWords report multiple version versioning Keywords: None References: None
Problem Statement: You just created a new CMLimitDef record and configured some limits. You take the source record's values to a level where a high or low limit should have been violated in the CMLimitDef record and an excursion should have been opened. But, nothing happened!
Solution: Try this "jump start" approach. In either the CMLimitDef record or in the CMGroupDef record, enter a text string in the DISP_DISABLE_REASON field (such as "testing" or "jump start"). Enter NO in the ENABLE_MONITOR field. Then enter YES in the ENABLE_MONITOR field. This should add an occurrence to the repeat area, #_OF_ALERTS_IN_MEM, of the CMLimitDef records with a HIST_ALERT_STATE of MON DISAB. Now test again by taking the source record's value to a level where the high or low limit is violated and see if an excursion has opened. If so, Compliance is working. If not, call Support! KeyWords Keywords: None References: None
Problem Statement: Aspen Compliance.21 reports show excursions and corrective actions taken for SOC Items exceeding one or more SOC Item parameters flagged for monitoring. If these Compliance.21 Reports are outputting the column headers but not the Compliance information, there are some configuration issues to be checked.
Solution: Things to check: 1. Verify that CMLimitDef records that should appear in the report are grouped in a record defined by CMGroupDef. 2. Verify that the CMGroupDef record is named in the domain record in the Compliance.21 domain hierarchy. 3. Verify that the RECORD_NAME field in the CMReportDef record is configured with either CM-System, a domain record, a group record, or a CMLimitDef record where you want the report to start reporting from as its top level. If using CM-System, make sure that the DOMAIN field in the CM-SYSTEM record is pointing to the top level "system" domain. KeyWords: report domain Keywords: None References: None
Problem Statement: Sometimes, the display of graphics in aspenONE Process Explorer will be too wide or too narrow to view. This article explains how to use the aspenONE Process Graphic Editor to adjust the display properties of your published or unpublished graphics to make them look better in A1PE.
Solution: First, the user needs to open the graphic he/she is working on in the Aspen Process Graphic Editor. Right-clicking on any blank area of the graphic and selecting 'Properties' opens the 'Display Properties' window. The user then tunes the height and width to a proper ratio in order to get a better size for the graphic. It is suggested to start with a magnitude of 3-10 units and publish to A1PE each time to check whether the final display in A1PE is satisfactory. KeyWords Graphic A1PE Graphic Editor Size Keywords: None References: None
Problem Statement: How do you get the Selection Value for a given SELECT_DESCRIPTION in a Selector record?
Solution: Since the Selection Value is a calculated field, you will need to calculate it as follows: 1st_SELECTION_VALUE + OCCNUM (of the given SELECT_DESCRIPTION) - 1. Here are two SQLplus query examples: local x; x = (select "1st_SELECTION_VALUE" from calc_error_type) + (select occnum from calc_error_type.1 where select_description = 'Unknown Error') - 1; write x; select "1st_selection_value"+occnum-1 from calc_error_type where select_description = 'Unknown Error'; KeyWords occurrence number calculation select value Keywords: None References: None
Problem Statement: Daylight saving time presents a problem to Aspen SQLplus when retrieving data from the AGGREGATES table. The day has only 23 hours when moving from standard time to daylight saving time. Likewise, the day has 25 hours when moving from daylight saving time to standard time. Aspen SQLplus skips the hour between 2:00 AM and 3:00 AM when retrieving AGGREGATES calculations on the day that daylight saving time starts and when the period is less than two hours. In a similar fashion, Aspen SQLplus repeats time stamps between 2:00 AM and 3:00 AM (but with different data) on the day that daylight saving time ends for periods less than two hours. When requesting AGGREGATES calculations for periods greater than or equal to two hours , Aspen SQLplus can either adjust the first interval by one hour and maintain expected time stamps or adjust the time stamps and use constant intervals.
Solution: The parameter DSADJUST controls AGGREGATES calculations around daylight saving time transitions when the period is greater than or equal to two hours. When 0, DSADJUST causes Aspen SQLplus to not adjust the first interval of the transition day and to adjust time stamps instead. When 1, DSADJUST causes SQLplus to adjust the first interval of the transition day and maintain natural time stamps. The default value for DSADJUST is 1. DSADJUST has no effect for periods less than two hours. As an example, daylight saving time in the United States started at 2:00 AM on March 10, 2013. The tag WBBTest4 has one minute trend values. The value of each trend value equals the minute of the day. Since the clock shifted forward one hour at March 10, 2013, 2:00 AM, the value for WBBTest4 at March 10, 2013, 3:00 AM is 120 as shown below. In the example below, the query requests four hour averages for March 10, 2013. Since DSADJUST is 1, the AGGREGATES table returns a three hour average from 00:00:00 to 04:00:00 (because the clock moved forward at 2:00 AM) and four hour averages thereafter. In the next example, the query returns a four hour average from March 10, 2013, 00:00:00 to 05:00:00 (because the clock moved forward at 2:00 AM) and then four hour averages thereafter since DSADJUST is 0. KeyWords: Keywords: None References: None
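A hedged sketch of the kind of AGGREGATES query discussed above is shown below; the tag name WBBTest4 comes from the example, the column names used are the commonly documented NAME, TS, PERIOD, and AVG, and the period units (10ths of a second) are an assumption to verify against your Aspen SQLplus version:
-- four hour averages across the spring daylight saving transition day
SELECT ts, avg FROM aggregates
 WHERE name = 'WBBTest4'
   AND ts BETWEEN '10-MAR-13 00:00' AND '11-MAR-13 00:00'
   AND period = 4*3600*10;   -- four hours expressed in 10ths of a second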
Problem Statement: How to close multiple windows in PIMS?
Solution: Open the T.Assays table in PIMS. We may open multiple windows while using PIMS; for example, every time you open a new solution report, PIMS will generate a new window for you. This is convenient because you can quickly navigate to your desired session by switching windows in PIMS. However, sometimes it is hard to find the desired window when too many windows are open. In this scenario, you can use the Close All Windows button to quickly close all the opened windows simultaneously. There are two ways to use the Close All function: Go to Window | Close All in the menu bar and click the Close All button to close all the open windows. The other way is to go to the toolbar, find the Close All button, and click it to close all the open windows. Keywords: Close All PIMS Windows References: None
Problem Statement: Can I continue developing 32-bit Aspen InfoPlus.21 applications when running Aspen InfoPlus.21 on a 64-bit platform?
Solution: Aspen InfoPlus.21 running on a 64-bit platform supports applications developed in a 32-bit environment; however, AspenTech does not provide a 32-bit development environment for Aspen InfoPlus.21 on a 64-bit platform. Customers wishing to continue developing 32-bit applications for Aspen InfoPlus.21 must maintain a separate 32-bit development system and move the application to the 64-bit platform. The best long term solution is to convert the 32-bit development environment to 64-bit. Keywords: develop application run application external task application program API C C++ References: None
Problem Statement: How does Aspen InfoPlus.21 determine which file set to shift into?
Solution: First, Aspen InfoPlus.21 checks if there are file sets with a status of None. These are empty file sets. If there are empty file sets, the repository will shift to the empty file set (of status None) with the lowest number. Secondly, if there are no empty file sets, the repository will shift into the oldest file set not marked as Reserved, overwriting the data in that file set. Finally, if all other file sets are marked as Reserved, the historian process for the repository will write an error message to the file error.log located in the root folder of the repository and then exit. At this point, Aspen InfoPlus.21 sends unprocessed historical data to the file event.dat located in the root folder of the repository. If this happens, either unreserve some file sets or add new file sets and restart the repository. The repository will complete the file set shift and unbuffer the data in the overflow file event.dat. KeyWords fileset status shift Keywords: None References: None
Problem Statement: How can I select the most recent trend value and time of a tag prior to a given time?
Solution: Use a subquery with the MAX function to find the most recent trend time prior to the given time. Then select the trend time and value based on the results of the subquery. The following example illustrates this technique: select ip_trend_time, ip_trend_value from ATCL101 where ip_trend_time = (select max(ip_trend_time) from ATCL101 where ip_trend_time < '02-FEB-18 12:00') Keywords: References: None
Problem Statement: How can I split a long EXECUTE or EXEC command over several lines?
Solution: Place each part of the EXECUTE or EXEC command on a separate line enclosed in single quotes. Do not place a comma between each segment. For example, the query EXECUTE 'CREATE INDEX a ON b(x, y) ' 'TYPE IS HASHED' ON p; splits the command EXECUTE 'CREATE INDEX a ON b(x, y) TYPE IS HASHED' ON p; onto two lines. Keywords: multiple lines two lines References: None
Problem Statement: The Aspen InfoPlus.21 standard tag set has a field called IP_ARCHIVING. The options for this field are "ON", "OFF", and "PAUSE". What is the difference between "PAUSE" and "OFF"?
Solution: "PAUSE" suspends history storage for a tag but still allows the viewing of history already collected. To resume history storage, set the field IP_ARCHIVING to "ON". Setting IP_ARCHIVING to "OFF" suspends history storage and also makes history already collected for the point unavailable. To resume history storage and to make previously collected history available again, set the field IP_ARCHIVING to "ON". Toggling IP_ARCHIVING from "OFF" to "ON" causes Aspen InfoPlus.21 to synchronize the in-memory history repeat area for the tag with what is stored in the tag's history repository. Keywords: IP_ARCHIVING OFF PAUSE References: None
Problem Statement: How to synchronize the scrolling in all the reports that are currently displayed.
Solution: Sometimes we want to compare the information in two or more reports, especially when some information exists only in the FullSolution report and case comparison alone is not enough. In that situation we can use the Synchronize Scrolling function provided by PIMS. The Synchronize Scrolling function lets you synchronize the position of open text bars when using the scrollbar or cursor keys. In other words, if you have multiple reports displayed, you will synchronize the scrolling in all the reports that are currently displayed. We can activate this function by clicking the Synchronize Scrolling button: This function can be used in combination with the split windows function, which is very convenient for comparing information between multiple reports. Keywords: PIMS Report Synchronize Scrolling Case comparison References: None
Problem Statement: In PIMS, we can use T.CAPS to define the capacity of your submodels and T.Proclim to define the process limits of your submodels. Both of them require us to define a corresponding row in the SXXX table: T.CAPS needs us to define the CCAP row and T.Proclim needs us to define the Zlim row. What is the main difference between these two functions?
Solution: The main difference between T.CAPS and T.Proclim is the calculation method: T.CAPS constrains a sum, while T.Proclim constrains an average. For T.CAPS, the CCAP row performs a summation. For example, the capacity of SNHT is shown below: PIMS will generate the below structure in the matrix from the above CCAP row: and if we write the equation out, it should be: -1.000000 * CCAPNHT1 +1.000000 * SNHTMN11 +1.000000 * SNHTDCN1 = 0.000000 Obviously, it is a summation that calculates the capacity. However, for T.Proclim, when you define a MIN and MAX value for a Zlim row, PIMS will generate 3 rows in the matrix. For example, ZSULCFP in SCFP: according to the above rows, PIMS will generate the below structure in the matrix: If we write out the equation for GSULCFP1 (which is the MAX limit for ZSULCFP), it should be: 3.000000 * ZSULCFP1 -1.000000 * SCFPLV11 * QSULLV11 -1.000000 * SCFPLV21 * QSULLV21 -1.000000 * SCFPHV11 * QSULHV11 -1.000000 * SCFPHV21 * QSULHV21 -1.000000 * SCFPDCG1 * QSULDCG1 -1.000000 * SCFPAR21 * QSULAR21 >= 0.000000 Combined with the ECFPSUL1 equation: 1.000000 * ZSULCFP1 -1.000000 * SCFPLV11 -1.000000 * SCFPLV21 -1.000000 * SCFPHV11 -1.000000 * SCFPHV21 -1.000000 * SCFPDCG1 -1.000000 * SCFPAR21 = 0.000000 We can easily see that the MAX = 3 is applied to an average value and not to the sum. Keywords: T.CAPS T.Proclim Matrix References: None
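In compact form (a restatement of the matrix rows above, writing S for the submodel stream activities and q for the corresponding property values):
\[
\text{T.CAPS:}\;\; \sum_i S_i \le \text{CAP},
\qquad
\text{T.PROCLIM:}\;\; \frac{\sum_i q_i S_i}{\sum_i S_i} \le \text{MAX}
\;\Longleftrightarrow\;
\text{MAX}\sum_i S_i - \sum_i q_i S_i \ge 0
\]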
Problem Statement: Will data collection stop if the Aspen APC server loses its connection to the license server?
Solution: Aspen DMC+ Collect and Aspen DMC3 Builder Collect will continue to collect data if the Aspen APC online server loses connection to license. The data collection process is not impacted by losing license. Aspen Watch collection using miscellaneous tags will continue to collect even if license is lost. Aspen PID watch collection and controller collection is impacted and data will not get collected for the time license is lost on the aspen watch server. Key Words Data collection DMC+ DMC3 Builder Keywords: None References: None
Problem Statement: This knowledge base article explains why the "Use mathematical system" check box associated with the Aspen Advisor Least-Squares Reconciliation (ALSR) calculation engine, located on the Advisor Expert System Preferences dialog screen, is always checked by default each time the dialog is opened. IMPORTANT: If the ALSR component was not installed during the Aspen Advisor installation, then all the options and preferences located below the "Use mathematical system" check box (as shown in the screen shot below) will be grayed out and disabled.
Solution: This is by design and is the correct functionality. Although the "Use mathematical system" option is always checked each time the Expert Preferences dialog is closed and reopened, for the ALSR engine to actually be executed the user must do one of the following: 1.) Manually select the "Resolve" reconciliation option during the Initialize-Reconcile-Save (IRS) process. 2.) Have the "Automatically resolve after reconciliation" option checked when executing the IRS process. 3.) Execute the "Resolve" reconciliation option automatically from a batch script call. For examples of the batch script calls that can be used to automate the execution of the ALSR processes, please review the knowledge base article referenced below: 121910: Can the ALSR Processes be scripted to run automatically from a batch script? Keywords: Expert System Preferences Use mathematical system References: None
Problem Statement: What is the maximum filename length for an Aspen DMC+ Collect file?
Solution: The maximum filename length allowed for an Aspen DMCplus Collect file is 15 characters. Keywords: DMC+ Collect Filename size References: None
Problem Statement: How can I save a CCF snapshot from the PCWS for a larger controller without a timeout?
Solution: A possible resolution is to manually increase the ASP script timeout for the ATControl website to 240 seconds or higher: Close the web server application. Go to Internet Information Services (IIS) Manager, expand the machine name, and then expand the default website. Expand AspenTech, select ACOView, and open ASP. Expand the Behavior section, go to the Limits Properties section, increase the Response Queue Time-out to a suitable value (for example, 3 minutes) and, if needed, the Response Buffering Limit, then click Apply. Open a new instance of the web server application. An equivalent command-line sketch is given after this article. Keywords: CCF Snapshot What-if Simulation Aspen Watch References: None
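As a hedged alternative to the IIS Manager steps above, the same ASP limits can be set from an elevated command prompt with appcmd.exe (found in %windir%\system32\inetsrv). The application path "Default Web Site/AspenTech/ACOView" is assumed from the navigation described above and may differ on your server:
appcmd.exe set config "Default Web Site/AspenTech/ACOView" -section:system.webServer/asp /limits.scriptTimeout:"00:04:00" /commit:apphost
appcmd.exe set config "Default Web Site/AspenTech/ACOView" -section:system.webServer/asp /limits.queueTimeout:"00:03:00" /commit:apphost
The first command sets the ASP script timeout to 240 seconds and the second sets the request queue timeout to 3 minutes. Recycle the application pool (or run iisreset) afterwards so the new limits take effect.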
Problem Statement: What is the purpose of the "Legacy" database option as a source database for the Aspen Batch Extractor and how is it used?
Solution: The Aspen Batch Extractor needs to connect to a database via SQL calls. So, in general, whether Legacy (also known as buffered) or not, the Extractor needs to be able to use an OLE DB connection. Most batch execution databases reside on MS SQL Server or Oracle, so the Aspen Batch Extractor can use its normal querying method. However, some databases (such as I/A Batch, for example) do not allow these types of queries because the OLE DB connection supports only a minimum of the core SQL commands. The Legacy (or buffer) flag can be used so that the Extractor issues simple SQL calls to the source database and "buffers" the returned data in temporary-like tables in the local configuration database. The Extractor then uses its normal querying method against the temporary-like tables. Once the Extractor is done with the locally buffered data, the data is deleted from the temporary-like tables. A simplified illustration of this pattern follows this article. Keywords: None References: None
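A simplified, hypothetical illustration of the buffered pattern described above (table and column names are invented for the sketch; the Extractor generates its own statements internally):
-- 1. Simple, core-SQL-only pass against the source database:
SELECT BatchID, EventTime, EventValue FROM SourceEvents WHERE EventTime > '2018-01-01 00:00:00';
-- 2. The returned rows are written by the Extractor into a temporary-like buffer table in the local
--    configuration database (shown here as one statement for illustration only; in reality the rows
--    travel over the OLE DB connection and are inserted by the Extractor):
INSERT INTO Buffer_SourceEvents (BatchID, EventTime, EventValue)
SELECT BatchID, EventTime, EventValue FROM SourceEvents WHERE EventTime > '2018-01-01 00:00:00';
-- 3. The normal, richer query is then run against the local buffer:
SELECT BatchID, MIN(EventTime) AS BatchStart, MAX(EventTime) AS BatchEnd FROM Buffer_SourceEvents GROUP BY BatchID;
-- 4. Once processing is complete, the buffered rows are removed:
DELETE FROM Buffer_SourceEvents;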
Problem Statement: This knowledge base article explains how to get a list of recently added tags.
Solution: There is nothing in the InfoPlus.21 database that directly tells you when a tag was created; however, you can infer that information if the tag has a history repeat area. When you assign a repository to a tag and activate archiving, InfoPlus.21 calculates the oldest time you can insert data into history using the following algorithm: If the tag's record ID has never been used, the oldest history insertion time is set to the current time minus the past time parameter of the repository. So, if the past time parameter of the repository is 365 days and you turn on archiving for a tag on 02-JAN-18 11:00:00, the oldest history insertion time for the tag will be 02-JAN-17 11:00:00. If the tag's record ID belonged to a point that was deleted earlier, then the tag's oldest history insertion time is set to the maximum of the value calculated above and the time of the previous point's last historical recording plus a microsecond. So, assuming your tag has a history repeat area, that you have not recycled a record ID, and that you have not changed the past time parameter of the repository, you can use a query similar to the one attached to this article to calculate record creation times. The attached query finds all tags defined by ip_analogdef created in the past week. Keywords: Xoldestok New tags Find new tags References: None
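A rough Aspen SQLplus sketch of the approach described in the article above (the attached query itself is not reproduced here). This sketch assumes the repository past time is 365 days and that the oldest allowed history insertion time is exposed as a field named XOLDESTOK, per the article's keywords; both assumptions, and the duration arithmetic, may need adjusting for your system:
-- Approximate creation time = oldest allowed insertion time + repository past time (365 days = 8760 hours)
SELECT name, xoldestok + 8760:00 AS "Approx. creation time"
FROM ip_analogdef
WHERE xoldestok + 8760:00 > CURRENT_TIMESTAMP - 168:00;  -- created within the past week (168 hours)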
Problem Statement: This knowledge base article acknowledges AspenTech's Support of the Aspen Operations Reconciliation and Accounting (AORA) Software when installed in selected Virtual Environments, and also explains the Virtualization Requirements for installing AORA Apps on Virtual Systems.
Solution: -- AspenTech only supports virtualization of the AORA application software on the specific virtual platforms tested and certified for each major software release. -- All supported virtual platforms for each aspenONE software release are documented in the Platform Support documents, which are published for all customers to download from the following location: https://www.aspentech.com/platform-support NOTE: When viewing the platform specification documents at the above link, note that the supported platform specs for AORA are documented in the "Manufacturing Execution Systems (MES)" specification documents (one for each supported aspenONE release); the applicable specs are those shown as "checked" under the "Plant Operations" category/family listed down the left-hand side of the document. ** Both AORA and ATOMS (Aspen Tank and Operations Manager) belong to the "Plant Operations" family, so the supported virtualization specs confirmed for AORA also apply directly to ATOMS installations. -- When virtualizing the AORA software, customers should also ensure that the operating system(s) selected for the host platforms hosting the virtual machines used for the AORA server/client installations adhere to the supported platform specifications documented for the aspenONE software release being installed or upgraded to. Keywords: Platform Support Specifications VPC Virtual PC Virtualization VM References: None
Problem Statement: This knowledge base article describes how the Aspen Operations Reconciliation and Accounting Expert System's Least-Squares Reconciliation (ALSR) processes can be scripted to run automatically from a batch script.
Solution: As of the 2004.2.1 release of Advisor (now referred to as Aspen Operations Reconciliation and Accounting), enhancements were made to the batch scripting code to allow the AORA Expert System to support the new publishing mechanism. In addition, related enhancements were made to the scripting language to allow the ALSR (Advisor Least-Squares Reconciliation) parameters to be set from the batch script language for use in automated batch scripts. The following batch scripting extensions were implemented:
For publishing:
Expert <model_name>, PUB;
For Expert / Math System parameters:
Expert <model_name>, SETOPTION, <option_name>, <option_value>;
The following table describes the ALSR parameters that can be used in batch scripts to automate the ALSR "Resolve" process executions (Option Name | Description | Allowed Option Values):
ASRCSOLVER | ASRC Solver | DMOQP, DFLT
FORCEMASSREC | Force Mass Basis | TRUE, FALSE
IMBALBOUNDS | Set Imbalance Bounds | OFF, ON
IMBOBJPREFACT | Imbalance Prefactor | <number>
MAXITERATIONS | Maximum Iterations | <number>
MSMTBOUNDS | Measurement Bounds | OFF, ON, ONRLX, ONHARD
OBJCNVGTOL | Objective Conv. Tol. | <number>
OBJERRTERMS | Objective Error | ABS, REL, DEF
OBJSCALEFACT | Objective Scale | <number>
RESCNVGTOL | Residual Conv. Tol. | <number>
RESLVRECON | Resolve after Reconciliation | TRUE, FALSE
SOLVEMODE | Solver Mode | SLVBAL, DERROR, MINIMBAL
TRACELEVEL | Tracing Level | LOW, HIGH, NONE, STANDARD
USEMATH | Use Math System | TRUE, FALSE
NOTE: When an ALSR option is not specified in a batch script, the Math System defaults to the saved options, as defined by the hard-coded system defaults, which are reloaded at run time each time the user logs into the main application GUI (Advisor.exe) and loads the Expert System menu.
Example batch script to Minimize Imbalances using the ALSR:
setlogfile "C:\Advisor ALSR Logs\MinimizeImbalances.log";
OpenConsole;
Expert Model_Name, SETOPTION, USEMATH, TRUE;
Expert Model_Name, SETOPTION, SOLVEMODE, MINIMBAL;
Expert Model_Name, SETOPTION, ASRCSOLVER, DMOQP;
Expert Model_Name, SETOPTION, RESCNVGTOL, 1e-006;
Expert Model_Name, SETOPTION, OBJCNVGTOL, 1e-004;
Expert Model_Name, SETOPTION, OBJSCALEFACT, 1;
Expert Model_Name, SETOPTION, IMBOBJPREFACT, 0;
Expert Model_Name, SETOPTION, FORCEMASSREC, False;
Expert Model_Name, SETOPTION, IMBALBOUNDS, OFF;
Expert Model_Name, SETOPTION, MSMTBOUNDS, OFF;
Expert Model_Name, SETOPTION, OBJERRTERMS, ABS;
Expert Model_Name, SETOPTION, MAXITERATIONS, 50;
Expert Model_Name, IN;
Expert Model_Name, RC;
Expert Model_Name, RS;
Expert Model_Name, SV;
Expert Model_Name, PUB;
CloseConsole;
Example batch script to Distribute Random Error using the ALSR:
setlogfile "C:\Advisor ALSR Logs\DistRandomError.log";
OpenConsole;
Expert Model_Name, SETOPTION, USEMATH, TRUE;
Expert Model_Name, SETOPTION, SOLVEMODE, DERROR;
Expert Model_Name, SETOPTION, ASRCSOLVER, DMOQP;
Expert Model_Name, SETOPTION, RESCNVGTOL, 1e-006;
Expert Model_Name, SETOPTION, OBJCNVGTOL, 1e-004;
Expert Model_Name, SETOPTION, OBJSCALEFACT, 1;
Expert Model_Name, SETOPTION, IMBOBJPREFACT, 0;
Expert Model_Name, SETOPTION, FORCEMASSREC, False;
Expert Model_Name, SETOPTION, IMBALBOUNDS, ON;
Expert Model_Name, SETOPTION, MSMTBOUNDS, ONHARD;
Expert Model_Name, SETOPTION, OBJERRTERMS, REL;
Expert Model_Name, SETOPTION, MAXITERATIONS, 50;
Expert Model_Name, IN;
Expert Model_Name, RC;
Expert Model_Name, RS;
Expert Model_Name, SV;
Expert Model_Name, PUB;
CloseConsole;
** The parameter "Model_Name" used in both sample scripts above is the ODBC DSN connection name for your model.
NOTE: MSMTBOUNDS can also be set to "ONRLX" to provide a more relaxed error distribution with respect to the measurement bounds, rather than "ONHARD" (firm).
The recommended standard procedure for running the ALSR (summarized below), as well as more detailed information about the Expert System preferences, can be found in the online help file named "aspenreconciliationforms.chm" that comes packaged with the AORA software installation.
Also, if you want the default ALSR and user-generated log files to be opened automatically for viewing after an ALSR execution, you can add the following lines to the end of the script (per the batch script examples above):
BatchExecute C:\Windows\system32\Notepad.exe C:\Program Files\Advisor\Expert\ASRC_Control.log;
BatchExecute C:\Windows\system32\Notepad.exe C:\Program Files\Advisor\Expert\ASRC_History.log;
BatchExecute C:\Windows\system32\Notepad.exe C:\Program Files\Advisor\Expert\ASRC_Report.log;
BatchExecute C:\Windows\system32\Notepad.exe C:\Advisor ALSR Logs\MinimizeImbalances.log;
BatchExecute C:\Windows\system32\Notepad.exe C:\Advisor ALSR Logs\DistRandomError.log;
Understanding the ALSR Math Engine: A brief overview of the recommended 3-stage ALSR execution procedure:
1.) Build or load the model for reconciliation.
2.) Select Tools | Expert. A new menu bar item labeled Expert appears.
3.) Select Expert | Preferences. The Expert Preferences dialog box appears.
4.) Select Expert | Initialize to simultaneously initialize the Expert system and ALSR.
5.) Configure preferences for the Expert system and select Expert | Reconcile. NOTE: This execution may auto-correct gross errors.
6.) Select Expert | Preferences to open the Expert Preferences dialog box again.
7.) In the Mathematical System section, set the following controls: from the Solver Mode selector, choose Minimize Imbalances; for Imbalance Bounds, choose Off (default); for Measurement Bounds, choose Off (default).
8.) Select Expert | Resolve to run the ALSR engine.
9.) Examine the results to identify any imbalances that could not be driven to zero.
10.) Continue with the next step, or repeat steps 4 through 9 as needed.
11.) Select Expert | Preferences to open the Expert Preferences dialog box again.
12.) In the Mathematical System section, set the following controls: from the Solver Mode selector, choose Distribute Random Error; for Imbalance Bounds, choose On; for Measurement Bounds, choose On Relax.
13.) Select Expert | Resolve to run the ALSR engine.
14.) Examine the results to identify any measurements not within accuracy. NOTE: This is typically an excellent reconciled solution to the problem. However, you may wish to attempt a formal distribution of error with imbalances in tolerance. The uncertainty with respect to feasibility may cause longer solution times for large problems, only to return an "infeasible problem" indication.
15.) Select Expert | Preferences to open the Expert Preferences dialog box again.
16.) In the Mathematical System section, set the following controls: from the Solver Mode selector, choose Distribute Random Error; for Imbalance Bounds, choose On; for Measurement Bounds, choose On Firm.
17.) Select Expert | Resolve to run the ALSR engine. NOTE: If a message is returned that the problem did not solve normally, the problem should be considered infeasible with respect to all of the original bounds, and the relaxation used in steps 11 to 13 was necessary to some degree.
18.) Examine the results to identify any measurements not within accuracy.
Keywords: ALSR Expert System Preferences Use mathematical system References: None
Problem Statement: This Knowledge Base article answers the following question: What is the meaning of the Configuration Processing statuses in the Aspen Extractor Scheduler?
Solution: There are five Configuration Processing statuses in the Aspen Extractor Scheduler: 1. INITIAL = when the configuration is added to the processing list. 2. PROCESSING = when one or more tables are being processed. 3. OK = when the last processing cycle has completed successfully. 4. COMPLETE = if an End Time was specified and the processing has progressed up to the end time. 5. ERROR = if an unrecoverable error has occurred. Keywords: None References: None
Problem Statement: When configuring the Aspen Batch and Event Extractor schedule, the following message is received: The remote server machine does not exist or is unavailable.
Solution: The Aspen Batch Extractor and Aspen Batch Query Tool products use DCOM communications to interact with the external Aspen Production Record Manager (APRM) database (MSSQL or Oracle). Make sure that the DCOM communication ports are opened in the firewall between the client and the APRM database server. See AspenTech KB article 118957, "DCOM consideration when using a firewall", for a description of how to restrict the range of DCOM communication ports; a hedged firewall-rule sketch is also given after this article. The programs use Aspen Data Source Architecture (ADSA) to work with Aspen InfoPlus.21 and the Aspen Production Record Manager (Batch.21) server. Keywords: Batch, Event, Extractor References: None
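As a hedged sketch only (the exact approach and port range should follow KB 118957 and your site's policy), opening TCP 135 (the DCOM/RPC endpoint mapper) plus a restricted dynamic port range of, say, 5000-5100 in Windows Firewall could look like this:
netsh advfirewall firewall add rule name="DCOM endpoint mapper" dir=in action=allow protocol=TCP localport=135
netsh advfirewall firewall add rule name="DCOM restricted dynamic ports" dir=in action=allow protocol=TCP localport=5000-5100
The 5000-5100 range is only an example; it must match whatever range DCOM has actually been restricted to on the APRM database server.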
Problem Statement: This Knowledge Base article provides steps to resolve the following error: "Error using SQL: Select * from (table name) where 1 = 0, Connection Failure" which may be encountered when trying to edit any of the tables configured in the Configure Extraction Rules window.
Solution: The above error message may be caused by the BlueCoat network optimizer installed to optimize the network traffic between the Extractor server and the DeltaV database SQL Server located on another machine. To resolve the issue, please remove the relevant segment of the network from the optimizer. Keywords: Blue Coat References: None
Problem Statement: How do I control the conversion of local date times to UTC for Aspen Extractor?
Solution: Aspen Extractor always sends timestamps and date-characteristic values to Aspen Production Record Manager in UTC, but these dates and times can be in local time or UTC in the source database. The best approach to avoid issues during a DST change is to store the date and time values in UTC in the source database and instruct Aspen Extractor to read them from there. By default, Aspen Extractor assumes the data is in UTC; this is dictated by the Time Fix flag being zero. It can be set to zero programmatically by executing the following SQL statement: update dbo.ExtractorServers set TimeFix = 0 where SourceDBName = '<MySourceDBName>' When Time Fix is not zero, the dates and times in the source are treated as NOT being in UTC and are converted to UTC (although the Time Fix value itself is not used for the conversion) before being submitted to Aspen Production Record Manager. If working with an existing table, a new column can be created in which the dates and times are saved in UTC when the source data is saved; Aspen Extractor can then be modified to read the dates and times from these new columns instead of from those in local time (a sketch follows this article). Keywords: DST UTC Local time Extractor References: None
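As an illustration of the last point (hypothetical table and column names; assumes a Microsoft SQL Server 2016 or later source database), a UTC copy of a local-time column could be added and populated along these lines, after which Aspen Extractor can be configured to read the new column as described above:
-- Add a column to hold the UTC equivalent of the local-time EventTime column
ALTER TABLE dbo.SourceEvents ADD EventTimeUtc datetime NULL;
-- Populate it, interpreting EventTime as local time in the given time zone (adjust the zone name as needed)
UPDATE dbo.SourceEvents
SET EventTimeUtc = CAST(EventTime AT TIME ZONE 'Central Standard Time' AT TIME ZONE 'UTC' AS datetime);
With the source values stored in UTC, the Time Fix flag can then be left at (or set to) zero as shown above.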