Problem Statement: Does the order of conditions in a Where clause impact performance?
Solution: The order of conditions in a WHERE clause doesn't usually matter, but it can affect performance. If the WHERE clause consists of a series of conditions linked by AND, the conditions are evaluated from left to right, and evaluation stops at the first condition that returns FALSE. So it is usually better to put any simple, cheap conditions first. For example, WHERE a > 10 AND mylongfunction(a) = 0 is better than WHERE mylongfunction(a) = 0 AND a > 10. SQLplus does separate conditions on fixed-area fields from conditions on repeat-area fields, and it does optimize conditions on name and key timestamp fields. Keywords: sqlplus where References: None
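As an illustration of the ordering rule above, here is a minimal sketch; mytable, the field a, and mylongfunction are hypothetical names used only for this example:
-- Cheap comparison first: evaluation stops here whenever a <= 10,
-- so the expensive function is only called for rows that pass the first test.
SELECT name FROM mytable
WHERE a > 10
  AND mylongfunction(a) = 0;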
Problem Statement: The CIM-IO User's Manual provides a method of estimating the disk space requirements for the store file generated when Store & Forward is activated (found in Chapter 8, CIM-IO Store & Forward). This tech tip provides the user with an SQLplus query to perform the calculation for all tags in all records defined by IOGETDEF.
Solution: In the SQLplus query provided, downtime is a hardcoded value for the number of hours for which a theoretical store file will be generated. The user can change the desired store time in the calculation by changing the 120 hours constant entered for downtime in the query. The user will need to add up the blocks in the right hand column of the results to get the total size required for the store file. This calculation is based on the current number of tags at their current IO frequencies. Any changes in either the number of tags or their IO frequency will change the calculated size of the store file. The user will need to use their knowledge of any planned or anticipated changes their site/system may have, along with good engineering judgement, to ensure that an appropriate hard disk size is chosen. Keywords: cimio store and forward store file size References: None
Problem Statement: How do you solve the problem where data coming into a Setcim or InfoPlus.21 database from a Cim-IO server running VMS is timestamped differently from the current system time?
Solution: The symptoms of the problem are: the system time on both the InfoPlus.21 server/Cim-IO client (NT) is current, and the time on the Cim-IO server (VMS) is also current. Running the Cim-IO test program (cimio_t_api) on both the Cim-IO server and client returns the current time (using option 1: Test Cim-IO time functions). The Get record containing the data records (i.e. analog records, discrete records, etc.) has an IO_LAST_UPDATE of the current time (IO_LAST_STATUS can be Success). When manually inputting data into the IP_INPUT_VALUE field of the analog record, IP_INPUT_TIME is updated with the current time. The only place the timestamp is not current is in the IP_INPUT_TIME field of the analog record when it is updated by Cim-IO, and as a result in IP_VALUE_TIME, IP_TREND_TIME, and the historian. The reason for this problem is that the time server call that VMS systems make returns GMT. It is necessary to tell Cim-IO how many hours it must add to or subtract from GMT to get the local time. The CIMIO_TIMEZONE.DEF file is used for this purpose and is located on both the Cim-IO server and the Cim-IO client: Cim-IO Server: $CIMIOETC; Cim-IO Client: %SETCIMETC%. In this file, you make a single entry: a signed integer representing the number of hours west (positive) or east (negative) of GMT. IMPORTANT NOTES: *** This change should be made on the Cim-IO server file. *** For positive time differences (i.e. west of GMT), you do not have to enter the + sign: simply enter the integer hour value. Keywords: References: None
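For example (assuming a site six hours west of GMT, such as US Central Standard Time), the single entry in CIMIO_TIMEZONE.DEF on the Cim-IO server would simply be:
6
A site two hours east of GMT would instead enter:
-2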
Problem Statement: Error: 1114. Failed to Connect to Link IP21: Specified driver could not be loaded due to system error 1114. Why does error 1114 appear when configuring an ODBC OAM Connection in Aspen SQLplus?
Solution: If the system environment variables are not configured correctly, the above error message appears when creating a new ODBC OAM Connection to an Oracle database. To resolve the problem, add the correct Oracle BIN folder path to the system Path environment variable. This is an Oracle-related error and is already documented by Oracle. Keywords: 1114 system error ODBC SQL CN- References: None
Problem Statement: How can I send an e-mail through Microsoft Exchange 2010 using a query?
Solution: The following query sends an important e-mail through Microsoft Exchange 2010 and attaches the file tsk_dbclock.out. Before running the query, you must include the COM object for Microsoft Office by selecting View -> ... from the Aspen SQLplus tool bar and including the Microsoft Office Library. Also, Microsoft Exchange 2010 must be installed on your Aspen InfoPlus.21 server.
-- This query sends a simple important e-mail through Microsoft Exchange 2010
-- and attaches tsk_dbclock.out.
local objOutLook;
local itemMailOutLook;
--
-- Declare objects for the Microsoft Outlook Application and the e-mail item
--
objOutLook = CreateObject('Outlook.Application');
itemMailOutLook = objOutLook.createitem(olMailItem);
--
-- Fill in fields for the e-mail recipient and subject line
--
itemMailOutLook.to = '[email protected]';
itemMailOutLook.subject = 'Test Message';
--
-- Add the body of the message
--
itemMailOutLook.body = 'This is a simple important message';
--
-- Set the importance of the message to olImportanceLow, olImportanceNormal, or olImportanceHigh
--
itemMailOutLook.importance = olImportanceHigh;
--
-- Attach a file
--
itemMailOutLook.attachments.add('C:\ProgramData\AspenTech\InfoPlus.21\db21\group200\tsk_dbclock.out');
--
-- Send the message
--
itemMailOutLook.Send;
Keywords: e-mail Microsoft Office Microsoft Outlook References: None
Problem Statement: When running an SQL script in Aspen SQLPlus, the following error occurs.
Solution: This problem occurs because a local variable was used in an arithmetic calculation without declaring its data type. If a local variable is declared without a data type, it defaults to a Variant local variable. In an SQL script you can declare the variable's data type with the following statement: LOCAL variable_name <data_type> Keywords: variant arithmetic failed variant CN- References: None
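A minimal sketch of the fix described above, declaring the data types so the arithmetic no longer uses Variant local variables (the variable names are arbitrary):
-- Declare the data type explicitly instead of relying on the Variant default
LOCAL x REAL;
LOCAL y INTEGER;
x = 1.5;
y = 2;
WRITE x * y;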
Problem Statement: How to display in Aspen SQLplus times independent of daylight savings time adjustments.
Solution: Remove the daylight adjustment by using the function 'time_offset', i.e. write CURRENT_TIMESTAMP - time_offset. Example: write CURRENT_TIMESTAMP - 01:00:00.0 Keywords: daylight, time, offset, timestamp References: None
Problem Statement: How to select the 'ERROR_TYPE' field of QueryDef or CompQueryDef records using Aspen SQLplus?
Solution: ERROR_TYPE is an SQLplus function name, so when used as a field name it must be enclosed in double quotation marks, e.g. SELECT NAME, "ERROR_TYPE" FROM QueryDef; Keywords: error_type References: None
Problem Statement: While attempting to connect to a Microsoft Access database link using Aspen SQLplus, you may receive the error message: Failed to connect to link (hostname): Microsoft ODBC Access Driver The Microsoft Jet Database engine cannot open the file 'unknown'. It is already opened exclusively by another user, or you need permission to view its data.
Solution: Make sure the account used to start your Aspen InfoPlus.21 Task Service has a valid network share with proper permissions to your remote Access MDB file, usually located on another server. The full path to your Access file is viewable through the ODBCAD32 tool under the User or System DSN tab. Keywords: Access MDB SQL+ ODBC Jet Database Engine References: None
Problem Statement: Error: 1114. Failed to Connect to Link IP21: Specified driver could not be loaded due to system error 1114. Why does error 1114 occur when configuring an ODBC OAM Connection in Aspen SQLplus?
Solution: This error occurs when the system Path variable is not set correctly while creating a new ODBC OAM Connection to an Oracle database in Aspen SQLplus. To resolve the problem, add the correct Oracle BIN folder path to the system Path environment variable. This is an Oracle-related error and is already documented by Oracle. Keywords: 1114 system error ODBC SQL KR- References: None
Problem Statement: The BREAK keyword is used in a SELECT statement to separate sections of the output when the value for a column changes. The sections can be separated by a whole page (PAGE), a single line (SKIP 1), or multiple lines (SKIP n). The PAGE command can also be used by itself to advance to a new page. When using PAGE after the BREAK or stand alone, users often find that it doesn't truly print as you would expect. Several pages end up on 1 or 2 pages (depending upon the amount of data per page). How, then, do you get the query to recognize the page break?
Solution: There is a SET function called PAGE_LENGTH that must be set before producing the report. By default, PAGE_LENGTH is set to 0, which indicates no automatic paging. Depending upon your printer, you will need to set this to some size (number of lines per page). For example, SET PAGE_LENGTH = 66; says there are 66 lines per page. This number can be modified until you get satisfactory results with your report. Keywords: page page_length report References: None
Problem Statement: This query will search a specified path and convert Aspen Process Explorer .atgraphic files to XML files. The XML files can then be imported into the Aspen Web.21 Graphic Studio.
Solution:
local mycommand char(150);
set log_rows = 0;
macro axp_path = 'C:\Progra~1\AspenTech\Workin~1\APEx\Graphics\';
mycommand = 'dir/B/A-D ' || '&axp_path';
-- reads all data files and processes them one by one
for (select line myfile from (system mycommand) where line like '%.atgraphic') do
    mycommand = 'C:\Progra~1\AspenTech\APEx\GE\GraphicsEditor.exe -writeXML ' || '&axp_path' || myfile;
    system mycommand;
end
Keywords: None References: None
Problem Statement: A custom application developed using the Aspen InfoPlus.21 API is not working as intended when you run it in an Aspen SQLplus query using the SYSTEM command. The application works when executed in a command prompt window.
Solution: If your application requires any sort of configuration files to be executed properly, these files have to be copied to the Group200 folder. The path can be obtained by running SYSTEM 'CD' in the Aspen SQLplus Query Writer. Keywords: GROUP200 SQL+ Database API References: None
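For example, the following statements (run from the Aspen SQLplus Query Writer) print the working directory used by SYSTEM commands and then run an application from it; myapp.exe is a hypothetical placeholder for your own executable:
SYSTEM 'CD';          -- shows the Group200 folder where configuration files must be copied
SYSTEM 'myapp.exe';   -- the application can now find its configuration files in that folder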
Problem Statement: How do I make ODBC return more rows after increasing the Maximum Rows parameter in the Aspen SQLplus Advanced Setup screen?
Solution: After changing any parameter on the SQLplus Advanced Setup screen, you should stop Aspen InfoPlus.21, restart the Aspen InfoPlus.21 Task Service, and then start Aspen InfoPlus.21 again. Alternatively, you could simply reboot the server after stopping InfoPlus.21. Keywords: ODBC options Timeout Table list record References: None
Problem Statement: How do you shut down the InfoPlus.21 database when there are remote sessions of SQLplus running?
Solution: When a remote session of SQLplus is running and a user attempts to shut down the database, the following will happen: 1. In the Information/Prompts area of the InfoPlus.21 Manager (very bottom of the GUI window), you will see the message: TSK_SQL_SERVER shutdown successfully. 2. You will immediately get an error dialog box titled InfoPlus.21 Manager that says: Could not shutdown database. There may be an InfoPlus.21 program like DBMT, Engcon, or SQLplus running. 3. In the Information/Prompts area of the InfoPlus.21 Manager, you will see the message: InfoPlus.21 could NOT be stopped. If the user checks the InfoPlus.21 server and is certain that all database tools have been shut down, it could be that there is a remote SQLplus session running. To verify this, open the Windows Task Manager and go to the Processes tab. There will be a process named jsn_tcp_server. Kill that process: 1. In the Windows Task Manager, select the process jsn_tcp_server by clicking once on it. 2. Click the End Process button. You should now be able to successfully start the database back up. NOTES: If you try to start the database BEFORE killing the task jsn_tcp_server, then in the Messages/Prompts area of the InfoPlus.21 Manager you will see the messages: o Waiting for all external tasks to shutdown... o Shutting down <task_name> and so forth, until it goes through the complete shutdown routine. At the end of this routine, you will again see the InfoPlus.21 Manager dialog box. If you attempt to connect to the InfoPlus.21 database with the remote SQLplus session while the database is shut down, you will get a dialog box titled SQLplus - Network Error that says: Network Failure or old SQLplus server. Keywords: Network Error Network Failure or Old SQLplus server jsn_tcp_server References: None
Problem Statement: The ROUND function in Aspen SQLplus sometimes yields unexpected results when working with numbers having extended precision. For example, if ROUND(1.005954, 4) returns 1.006, why does ROUND(1.00595, 4) calculate 1.0059 instead of 1.006?
Solution: The reason is that computers do not store real numbers exactly. For example, internally 1.00595 is stored as 1.0059499999999999, which rounds down to 1.0059. To work around this anomaly, before rounding, try adding a small number to the number you are trying to round. For example, ROUND(1.00595 + .000001, 4) = ROUND(1.005951, 4) returns 1.006. Keywords: ROUND Roundoff Error Precision References: None
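The behaviour and the workaround described above can be reproduced directly in the Query Writer:
WRITE ROUND(1.005954, 4);            -- returns 1.006
WRITE ROUND(1.00595, 4);             -- returns 1.0059 (1.00595 is stored as 1.0059499999999999)
WRITE ROUND(1.00595 + 0.000001, 4);  -- workaround: returns 1.006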
Problem Statement: How can you specify which TCP/IP port to use when creating an Aspen SQLplus ODBC data source?
Solution: By default, the TCP/IP port number used when configuring an ODBC link to Aspen SQLplus is the one specified in the ADSA Aspen SQLplus service component: To use a different port, select a data link to Aspen SQLplus from the ODBC Data Source Administrator: The following screen appears after pressing the Advanced button: Uncheck the box Use Aspen Data Sources (ADSA) and press OK. This causes the following screen to appear: You can now enter an alternate port in the field TCP/IP Port. Keywords: TCP/IP port ODBC References: None
Problem Statement: Why is the left hand navigation bar not visible for Aspen IP.21 Process Browser or Aspen SQLplus Web-Based Reporting?
Solution: The left-hand navigation pane for Aspen IP.21 Process Browser or Aspen SQLplus Web-Based Reporting can be hidden when Internet Explorer Enhanced Security Configuration is enabled. This problem can be resolved by disabling this setting. Keywords: IE ESC navigation pane References: None
Problem Statement: Why am I getting no rows selected from an AGGREGATES table query?
Solution: There are two primary reasons that a query against the AGGREGATES table may return no data: (1) there is no tag data for the requested timespan, or (2) the range of timestamps is less than the aggregate period. For example, if the aggregate period is set as 12 hours, the difference between the starting and ending timestamps must be at least 12 hours. The following code will not return data for Tag1; note that the range of timestamps is only 10 hours while the period is 12 hours. SELECT name, min, max, avg FROM aggregates WHERE name like 'Tag1' AND ts between '01-JAN-05 10:00' AND '01-JAN-05 20:00' AND period = 12:00 The following code will return values for minimum, maximum, and average, assuming that the tag actually has data in the requested range of timestamps. SELECT name, min, max, avg FROM aggregates WHERE name like 'Tag1' AND ts between '01-JAN-05 10:00' AND '01-JAN-05 22:00' AND period = 12:00 Keywords: period rows aggregate References: None
Problem Statement: When a stored procedure is called which accesses a remote link that is currently unavailable, the calling query fails with an error which cannot be caught with a BEGIN-EXCEPTION-END block.
Solution: The only method of catching this type of remote query error is to start the query which contains the stored procedure as a sub-query and place the BEGIN-EXCEPTION-END block around the sub-query call.
BEGIN
    START RECORD 'SPQuery';
EXCEPTION
    ERROR ERROR_CODE, ERROR_TEXT||ERROR_LINE;
END
Alternatively, if the stored procedure does not return any values, then convert it to a query record and execute it using the START command. Keywords: References: None
Problem Statement: Customers would like to select evenly spaced historical data from InfoPlus.21 using SQLplus. How can this be accomplished?
Solution: Within SQLplus, there is a pseudo table named the HISTORY table. Because the boxcar data compression algorithm is used to reduce the amount of storage needed for InfoPlus.21 history, data is not stored at regular time intervals (even if it is being scanned at regular intervals). The HISTORY table can present InfoPlus.21 history data as if it was recorded at regular intervals. The HISTORY table contains the following fields: NAME FIELD_ID TS PERIOD REQUEST VALUE STATUS Data can be selected from the SQLplus HISTORY table using a request type of 1 (this will request evenly spaced data). The following query is provided for example purposes and will return evenly spaced historical data: select NAME, TS, VALUE from history where NAME='record1' and TS between '17-Jun-09 14:00:09' and '18-Jun-09 14:00:00' and REQUEST=1; Note-1 that there are in fact SIX other options for the REQUEST setting, depending on exactly how you would like the data used and displayed - as described in the Aspen SQLplus online help files. Defaults are: REQUEST = 1 PERIOD = 1 minute TS range = 1 hour back from current time Note-2 It is very important to select boundaries for the TS parameter that allow the PERIOD parameter to be properly aligned. Otherwise, the returned results can be difficult to predict. For example: select VALUE, TS from HISTORY where NAME = 'ATCAI' and TS between '22-Jun-09 09:00' and '22-Jun-09 12:11' and PERIOD = 00:10:00; This returns: VALUE TS 5.70714 22-Jun-09 09:00:00.0 10.5879 22-Jun-09 09:10:03.1 10.0189 22-Jun-09 09:20:06.3 5.65094 22-Jun-09 09:30:09.4 10.0433 22-Jun-09 09:40:12.6 7.49817 22-Jun-09 09:50:15.7 1.72653 22-Jun-09 10:00:18.9 11.0976 22-Jun-09 10:10:22.1 7.12341 22-Jun-09 10:20:25.2 9.01706 22-Jun-09 10:30:28.4 3.26711 22-Jun-09 10:40:31.5 2.42964 22-Jun-09 10:50:34.7 7.97454 22-Jun-09 11:00:37.8 6.96392 22-Jun-09 11:10:41.0 9.2802 22-Jun-09 11:20:44.2 8.322 22-Jun-09 11:30:47.3 3.57672 22-Jun-09 11:40:50.5 3.70212 22-Jun-09 11:50:53.6 1.98888 22-Jun-09 12:00:56.8 For more information on the fields of the HISTORY table and their uses, please refer to the Aspen SQLplus Help files. Keywords: HISTORY References: None
Problem Statement: How can I read comma-delimited files using SQLplus?
Solution: By selecting from a file and using the substring function to split the columns. For example: SELECT substring (1 of line between ',') as name, substring (2 of line between ',') as value FROM 'comma.txt' Keywords: References: None
Problem Statement: How does SQLPlus read values from InfoPlus.21 (IP.21) history? Are there any differences between 6.0.1 and 2004.2? Scenario #1: When the IP_#_OF_TREND_VALUES = 1, does SQLPlus grab the 1st most recent value from memory and the second one from disk? Scenario #2: When the IP_#_OF_TREND_VALUES = 5, does SQLPlus grab the 1st and the 2nd most recent value from memory? or... Scenario #3: Regardless of what the IP_#_OF_TREND_VALUES is set to, does SQLPlus always grab the most recent values from disk?
Solution: Generally speaking, in all the scenarios listed above, SQLplus simply calls an IP.21 API function when reading history. Depending on the version and the query, SQLplus will call the native IP.21 history routine, RHIS21DATA(), or some of the Setcim history routines (FINDHISX, RHISDATAX, FINDHISUSTS, RHISDATAUSTS). The history read routines may decide to fetch some relevant data from the memory-resident history repeat area while fetching the remainder from disk. In some cases, some history read routines will only include archived data in the returned data set. For example, the history read routines might exclude very recent data that has been queued, but not archived, if doing so would introduce a gap into the resulting history data set. To be more precise, for a query of a history repeat area field from a record (e.g. SELECT IP_TREND_VALUE FROM atcai), the InfoPlus.21 API uses RHISDATAUSTS. For a query of the HISTORY table, it uses RHIS21DATA. You can also read specific occurrences from the memory repeat area. For example: SELECT IP_TREND_VALUE[1], IP_TREND_VALUE[2] FROM atcai This calls RDBVALS. There is no difference in the calls that SQLplus makes for these queries between 6.0.1 and 2004.2. Finally, regardless of what IP_#_OF_TREND_VALUES is set to, SQLplus always returns the most recent values from disk. Note, however, that if you are using compression and are getting values from IP_TREND_VALUE, this may not be the most recent value received. Only the values that break compression are sent to the archive file. Additional information on this topic is provided in the Release Notes for the latest patch for IP.21 Server. For 6.0.1 it is Solution 118398. Below are the excerpts from that Solution: When called with H21_GET_ACTUALS mode, the API function RHIS21DATA() should only return archived data if the system-wide environment variable H21NOREADRPA is set to any value (i.e. 1). When called with some other mode, or if H21NOREADRPA is undefined, then RHIS21DATA() might also return the current value field, if any, from the fixed area plus additional, as yet unarchived, data that are in the memory-resident history repeat area. Similarly, if archiving is enabled and H21NOREADRPA is defined, RHISDATAUSTS() should only return data that has been historized by the archiver. If H21NOREADRPA is not defined, then RHISDATAUSTS() may also return additional data from the memory-resident history repeat area that has not been historized by the archiver yet. Some users may want to set the H21NOREADRPA environment variable in order to avoid a generally minor problem where recent history could sometimes seem to disappear temporarily. This can seem to happen if a history occurrence shifts out of the memory-resident history repeat area before it has been dequeued by the archiver, a situation that is more likely for a tag having a small history repeat area that is being written and read multiple times in rapid succession. Most users will probably not want to set the H21NOREADRPA environment variable since reading history from memory is much faster than reading it from disk. Furthermore, some users have developed SQLplus queries based on the assumption that these functions can read from the memory-resident history repeat area. Keywords: References: None
Problem Statement: How do I determine database size information using an Aspen SQLplus query?
Solution: The function DATABASE_SIZE returns useful sizing information about an Aspen InfoPlus.21 database. The following table lists the parameters DATABASE_SIZE accepts: Parameter Description TOTAL_WORDS Total number of words MAX_WORDS Maximum number of words HEADER_WORDS Number of words in the database header LOCATOR_WORDS Number of words in the locator table FREE_WORDS Number of free words ALPHA_WORDS Number of words in the alphabetization table HIGHEST_RECORD Highest used record ID TOTAL_RECORDS Total number of records MAX_RECORDS Maximum number of records USABLE_RECORDS Number of usable records UNUSABLE_RECORDS Number of unusable records For example, the query write database_size('total_words'); returns the database size in words. Keywords: References: None
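A short sketch that reports several of the parameters from the table above in one query:
WRITE 'Total words:    ' || DATABASE_SIZE('TOTAL_WORDS');
WRITE 'Free words:     ' || DATABASE_SIZE('FREE_WORDS');
WRITE 'Total records:  ' || DATABASE_SIZE('TOTAL_RECORDS');
WRITE 'Usable records: ' || DATABASE_SIZE('USABLE_RECORDS');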
Problem Statement: When running a script in Aspen SQLPlus, the following error occurs.
Solution: This error occurs because the data type was not specified when declaring a local variable used in an arithmetic calculation. If a local variable is declared without a data type, it is treated as a Variant local variable by default. In an SQL script you can specify the data type when declaring a variable as follows: LOCAL variable_name <data_type> Keywords: variant arithmetic failed variant KR- References: None
Problem Statement: Dividing two numbers in Aspen SQLPlus may produce unexpected results. For example: write trunc(1/3, 2); returns 0 instead of 0.33.
Solution: The result of the division depends on whether you are using integer or real arithmetic. If both the numerator and denominator are integers, then Aspen SQLPlus uses integer arithmetic and ignores the remainder. If either the numerator or denominator is a real number, then Aspen SQLPlus uses real arithmetic and includes the remainder. In the example trunc(1/3, 2), Aspen SQLPlus first divides one by three using integer arithmetic and ignores the remainder, for a result of 0. In the second example, we have declared x, y, and z as real (double) numbers. This forces Aspen SQLPlus to use real arithmetic and include the remainder. local x,y,z double; x= 1; y= 3; z=x/y; write trunc(z,2); Result: 0.33 In a third example, 1.0/3 or 1/3.0 causes Aspen SQLPlus to use real arithmetic because either the numerator or denominator is real. write 1.0/3; write 1/3.0; Result: 0.333333 0.333333 Keywords: Division Remainder Decimal SQLPlus References: None
Problem Statement: This knowledge base article discusses the precedence of security object models used by client applications and by InfoPlus.21. This article answers the question of whether or not users are able to write to the InfoPlus.21 database if: The users are granted the 'Write' privilege for SQLplus in the AFW Security Manager and The same users are in a role in which InfoPlus.21-level write access is specifically denied
Solution: The InfoPlus.21 API functions check the security level of each user before a write to InfoPlus.21 is executed. Also, when executing a write to InfoPlus.21, the SQLplus Query Writer calls the InfoPlus.21 API to communicate with the database. Therefore, a user may have SQLplus Query Writer privilege to execute writes though the SQLplus Query Writer, but the data will not be written to InfoPlus.21 if the user is a member of a role which has write-restricted access within InfoPlus.21 security. The same concept applies if the user has an SQLplus client (2.5.1 or earlier) which pre-dates AspenTech's security model. Any v.2.5.1 or earlier client which connects to InfoPlus.21 (v.3 or later) connects through the IP21GUESTUSER account. The security privileges for users which access InfoPlus.21 through these older client tools must be granted to the IP21GUESTUSER account. Keywords: write database sql References: None
Problem Statement: After setting up a new database link, the queries written using that link sort timestamps alphabetically instead of chronologically. Cause: when configuring the Aspen SQLplus connection in the ODBC Data Source Administrator, the default setting in the SQLplus Advanced Setup automatically marks the option Timestamp sent as Character.
Solution: To have queries interpret timestamps as chronological events, all that is needed is to uncheck the option Timestamp sent as Character and restart Aspen SQLplus. To do this: 1. Open the ODBC Data Source Administrator. 2. Select the System DSN tab. 3. Select the database link that was previously made and select Configure... This opens the SQLplus Setup dialog box. 4. Press the Advanced button. This opens the SQLplus Advanced Setup dialog box. 5. Unselect the third checkbox, Timestamp sent as Character. 6. Press OK to exit the SQLplus Advanced Setup dialog box. 7. Press OK to exit the SQLplus Setup dialog box. 8. Press OK to exit the ODBC Data Source Administrator. 9. Open the Aspen InfoPlus.21 Manager. 10. In the Running Tasks box, select TSK_SQL_SERVER and press STOP TASK. 11. In the Defined Tasks box, select TSK_SQL_SERVER and press RUN TASK to restart the task. Now reconnect to the database in Aspen SQLplus. All queries using the database link should now sort timestamps chronologically. Keywords: database link timestamps sort incorrectly References: None
Problem Statement: This knowledge base article demonstrates how to determine the duration of time that an Aspen InfoPlus.21 tag has a certain value.
Solution: The following script will determine the duration of time that a record is at a certain value. The script takes one parameter, a record name. It works with both uncompressed and compressed data and uses time arithmetic to determine the duration. NOTE: This script looks at all history for a record; it can easily be modified to look at a specific range of history. This Aspen SQLplus script, TestIt, is called by another Aspen SQLplus script that passes the name of the record to examine, for example: start record 'testit', 'atcai';
>>>>> Script begins <<<<<
-- TestIt.sql
-- AspenTech SQLplus routine to find the amount of time that a value was set above a given threshold.
-- The record name is represented by &1. This routine will work for compressed or uncompressed data.
-- TestIt.sql should be saved as a QueryDef or CompQueryDef record.
-- Assumptions: This is for a device such as a valve or motor that is simply closed/opened or off/on.
-- These routines should be easily changed to check for other values.
--
-- Set up needed variables
local starttime timestamp;  -- When the value went to threshold
local endtime timestamp;    -- When the value went below threshold
local elapsetime real;      -- The elapsed time
starttime = null;
--
-- Check for input parameters
--
if ( ('&1' = '') ) then
    error 'No Record Name', 'Supply a valid record name';
end;
-- Get the information from the InfoPlus.21 Historian and sort by time.
-- Determine when the transitions took place and calculate the elapsed time.
for (select name, ip_trend_value, ip_trend_time from &1 order by ip_trend_time) do
    -- Check for a valid transition
    if ( ip_trend_value >= 1) then
        if (starttime is null) then
            starttime = ip_trend_time;
        end;
    -- The value is less than the threshold. Make sure that there is a valid
    -- starttime and calculate the difference between times.
    -- delta_time() returns tenths (1/10) of seconds, so we must adjust.
    -- Also, the difference is given as a negative, so take the absolute value.
    else
        if ( starttime is not null ) then
            endtime = ip_trend_time;
            elapsetime = abs(delta_time(starttime, endtime)/10);
            write elapsetime || ' ' || starttime || ' ' || endtime;
            starttime = null;
            endtime = null;
        end;
    end;
end;
Keywords: SQLplus threshhold References: None
Problem Statement: Can I configure a timeout for an ODBC query?
Solution: Yes, this is one of the settings of the SQLplus/ODBC driver. You can customize it, by going to the Control Panel--> ODBC and select configure -> advanced for the SQLplus driver. Then fill in a setting for the timeout. Keywords: Desktop ODBC ODBC.21 SQLplus SQL+ References: None
Problem Statement: Meaning of '1 row inserted' and '1 row updated' in regards to updating a tag's history information using Aspen SQLplus.
Solution: When executing an Aspen SQLplus query to insert new or update existing history occurrences, the user is presented with one of the following messages: 1 row inserted. or 1 row updated. Although it may appear that the query executed correctly, there is still a chance that the data was not actually inserted or updated for the given tag. This can be the case if you are trying to insert or update history that is timestamped before the oldest fileset for the appropriate repository. FOR EXAMPLE: You have a repository whose oldest fileset's 'Start Date' = '10-JAN-09 06:00:00.0' and the following query is executed: INSERT into atcai (ip_trend_time, ip_trend_value, ip_trend_qstatus) values ('09-JAN-09 08:45:01.5', 12345.6, 'GOOD'); The Aspen SQLplus Query Writer will return to the output area: 1 row inserted. Yet, if a SELECT of this same data is executed immediately following the insert statement, it will not find the data. The reason this occurs is that the h21archive process managing this repository could not find a place to put the data. The role of SQLplus in this situation is to verify that the XOLDESTOK date (born-on date) for the tag is before the data that you are trying to insert, and that the PAST parameter on the repository allows history data to be inserted into the past for at least the amount of time that would include the timestamp designated in the query. Once this verification has been done and there are no errors, the data is passed to the Aspen InfoPlus.21 server into the processing queue for the appropriate repository, which is managed by its h21archive process. At this point the h21archive process does not know where the data came from, so it just discards it. Keywords: insert history XOLDESTOK References: None
Problem Statement: How to display tagnames greater than 24 characters with an SQLPlus query?
Solution: For columns longer than the default you can use the WIDTH function. For more details on how the WIDTH function is used please read the online help for Aspen SQLplus. Example: Select Name WIDTH 64 from IP_Analogdef where name like 'atc%'; Keywords: References: None
Problem Statement: If the iqtask.exe is experiencing a memory leak there may be a problem with the queries. Specifically, if you have any queries that are cached, that is, the PROTECTED field in the fixed area of your QueryDef record is set to CACHED, and your query contains a macro, this may very well be the problem.
Solution: Unlike other SQLplus queries, which translate each statement and then execute it, a cached query will translate the whole query and store it in memory before executing it. Because a cached query has already been translated, it cannot be changed once it's been cached. This means that macros may not be re-defined and their use should be restricted to constant values only. Therefore, if the cached query contains macros that do more than deal with constant values, remove the macros and determine another way to accomplish the same functionality. Keywords: cached macro References: None
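A minimal sketch of a cached-safe macro, assuming a hypothetical limit value; the macro expands to a constant only, so the query can safely be stored in a QueryDef record with PROTECTED set to CACHED:
-- The macro holds a constant, so it never needs to be re-defined after caching
MACRO limit = 50;
SELECT name, ip_value FROM ip_analogdef WHERE ip_value > &limit;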
Problem Statement: This knowledge base article demonstrates how to write a query which retrieves statistical data for a tag whenever a piece of equipment is on.
Solution: The macro EquipmentTagName contains the name of a tag defined against IP_DiscreteDef. This holds the running status (either Off or On) for a piece of equipment. The query assumes 0 means the equipment is off and 1 means the equipment is on. The macro InstrumentTagName has the name of a tag defined against IP_AnalogDef. The query works as follows: 1. The query prompts for starting and ending times to search history. 2. The query finds the times the equipment changed states from OFF to ON and from ON to OFF and stores the transition times in a temporary table. 3. The query finds the maximum, minimum, and average values of the instrument while the equipment was on and stores that information into another temporary table. 4. The query displays a report showing the times the piece of equipment transitioned from OFF to ON and from ON to OFF, the length of time the equipment was ON, and the maximum, minimum, and average values for the instrument during each time period. The output of the query looks like this: Start Time End Time Duration Maximum Minimum Average --------------------- --------------------- ------------ ------- ------- ------- 13-SEP-11 15:54:30.0 13-SEP-11 15:55:00.0 +000:00:30.0 10.95 0.52 6.12 13-SEP-11 15:55:30.0 13-SEP-11 15:56:00.0 +000:00:30.0 11.10 0.62 5.81 13-SEP-11 15:56:30.0 13-SEP-11 15:57:00.0 +000:00:30.0 11.20 0.72 5.50 14-SEP-11 09:10:00.0 14-SEP-11 09:35:00.0 +000:25:00.0 13.08 0.05 6.69 14-SEP-11 10:10:00.0 14-SEP-11 10:35:30.0 +000:25:30.0 13.06 0.03 6.55 14-SEP-11 11:10:00.0 14-SEP-11 11:35:00.0 +000:25:00.0 13.04 0.01 6.51 14-SEP-11 12:13:30.0 14-SEP-11 12:35:30.0 +000:22:00.0 13.08 0.04 6.60 14-SEP-11 13:10:00.0 14-SEP-11 13:35:30.0 +000:25:30.0 13.06 0.02 6.51 14-SEP-11 14:10:00.1 14-SEP-11 14:35:30.0 +000:25:29.9 13.08 0.05 6.56 14-SEP-11 15:10:00.0 14-SEP-11 15:35:30.0 +000:25:30.0 13.06 0.03 6.53 15-SEP-11 07:10:00.0 15-SEP-11 07:35:00.0 +000:25:00.0 13.05 0.02 6.55 Keywords: aggregates statistical information statistics maximum minimum average sample query Gaps and Islands References: None
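The full query from the entry above is not reproduced here; the following is a minimal sketch of the core idea under the stated assumptions, using hypothetical tag names EquipTag (defined by IP_DiscreteDef, 0 = off / 1 = on) and InstTag (defined by IP_AnalogDef). It loops over the equipment tag's history to find OFF-to-ON and ON-to-OFF transitions and reports statistics for the instrument over each ON interval; the actual knowledge base query also uses temporary tables, prompts for a time range, and reports the ON duration:
local starttime timestamp;
local endtime timestamp;
starttime = null;
for (select ip_trend_value v, ip_trend_time t from EquipTag order by ip_trend_time) do
    if (v = 1) then
        if (starttime is null) then
            starttime = t;                -- OFF -> ON transition
        end;
    else
        if (starttime is not null) then
            endtime = t;                  -- ON -> OFF transition
            -- Statistics for the instrument while the equipment was ON
            select max(ip_trend_value) as max_value,
                   min(ip_trend_value) as min_value,
                   avg(ip_trend_value) as avg_value
              from InstTag
             where ip_trend_time between starttime and endtime;
            starttime = null;
        end;
    end;
end;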
Problem Statement: How do I change the increment value of a FOR loop index?
Solution: Aspen SQLplus only increments FOR loop indexes by 1. If you need to specify a different step, then define a second variable and change the second variable within the loop as in the following example: local ndx integer; local ndx1 integer; for ndx = 1 to 5 do      if (ndx > 1) then            set column_headers=0;      end      ndx1 = 2*ndx;      select occnum, ip_trend_time, ip_trend_value from atcai            where occnum = ndx1; end This query produces output similar to:    OCCNUM ip_trend_time       ip_trend_value ---------- -------------------- --------------         2 30-JUL-13 14:06:35.3          0.31         4 30-JUL-13 14:06:25.3         11.60         6 30-JUL-13 14:06:15.3          0.62         8 30-JUL-13 14:06:05.3          0.48        10 30-JUL-13 14:05:55.3          3.71 Keywords: for, loop, argument, index, increment References: None
Problem Statement: The following examples shows how to insert records into the Aspen InfoPlus.21 database using Aspen SQLplus and a delimited text file.
Solution: This example inserts 5 IP_AnalogDef records into the database. The name, description, repository, and number of repeat area occurences are set from the datafile. Example: Insert records into the InfoPlus.21 database using a delimited data file. -- The substring () function has a number of uses and is documented in the help files. INSERT INTO ip_analogdef (name, ip_description, ip_repository, ip_#_of_trend_values) SELECT SUBSTRING (1 OF LINE BETWEEN '|'), SUBSTRING (2 OF LINE BETWEEN '|'), SUBSTRING (3 OF LINE BETWEEN '|'), SUBSTRING (4 OF LINE BETWEEN '|') FROM '\temp\insert.dat'; <insert.dat> test1|test for test1|TSK_DHIS|2 test2|test for test2|TSK_DHIS|2 test3|test for test3|TSK_DHIS|2 test4|test for test4|TSK_DHIS|2 test5|test for test5|TSK_DHIS|2 Keywords: INSERT INTO References: None
Problem Statement: This Knowledge Base article shows how to calculate daily averages for an analog variable using Aspen SQLplus.
Solution: To get daily averages for an analog tag called ATCAI run this query: SELECT TS, AVG FROM AGGREGATES WHERE NAME = 'ATCAI' AND PERIOD = 24:00:00 AND TS BETWEEN '01-JAN-13' AND '01-FEB-13'; Keywords: References: None
Problem Statement: How to configure the list of printers for automated Aspen SQLplus reports?
Solution: The list of printers available in Automated (or scheduled) mode is based on the user account of the task service running on the Aspen InfoPlus.21 system (the Aspen InfoPlus.21 Task Service). Note that Web.21 can be on a completely different machine and the client browser on yet another one. The automated reports are stored in the Aspen InfoPlus.21 database as SQLReportDef records. Hence, when Aspen SQLplus Reporting queries the available list of printers, the query is sent to the SQLplus external task (TSK_SQL_SERVER) on the Aspen InfoPlus.21 data source that has been selected. Since TSK_SQL_SERVER is launched by the task service, the user account of the task service is used to look up the printer list. So, in order to use a particular network printer, the user has to add the printer on the Aspen InfoPlus.21 server using the task service's user account. However, if the user simply runs the report and displays it in a browser, he has access to any printers he defines, because he is now using the Internet Explorer print function directly as he normally does. Keywords: reporting printers References: None
Problem Statement: The following script will count the number of times that a valve was opened or a motor was turned on for a record. It does not matter if the data in history is compressed or uncompressed. Note: This script will search all of history the way that it is written. It can easily be modified to search a specific time range.
Solution: The script takes one variable, &1, which is the name of the record to check. You would call the script from another script as in: start record 'countit', 'test1';
>>>>> Script begins <<<<<
--
-- SQLplus routine to find the number of times that something
-- turned on or was opened.
-- The record name is represented by &1.
-- This routine will work for compressed or uncompressed data.
--
-- Assumptions:
-- This is for a device such as a valve or motor that is
-- simply closed/opened or off/on.
--
-- These routines should be easily changed to check for
-- other values
--
-- Set up needed variables
--
local flag integer;         -- A flag
local total_times integer;  -- How many times
flag = 0;
total_times = 0;
--
-- Check for input parameters
--
if ( ('&1' = '') ) then
    error 'No Record Name', 'Supply a valid record name';
end;
--
-- Get the information from the InfoPlus.21 Historian
-- and sort by time. Calculate the number of times a transition
-- to 1 took place.
--
for (select name, ip_trend_value, ip_trend_time from &1 order by ip_trend_time) do
    --
    -- Check for a valid transition to 1
    --
    if ( ip_trend_value = 1 ) then
        if ( flag = 0 ) then
            flag = 1;
            total_times = total_times + 1;
        end;
    --
    -- If the value is not 1 then it must be 0.
    -- Reset the flag.
    --
    else
        if (flag = 1) then
            flag = 0;
        end;
    end;
end;
if ( total_times > 0 ) then
    write 'Total times = ' || total_times;
end;
Keywords: None References: None
Problem Statement: Sometimes it is necessary to convert a number from INTEGER to BIT and from BIT to INTEGER. This seems like a basic and obvious function of the CAST statement. However, currently there is no direct way to do so in Aspen SQLplus. The indirect way is described below.
Solution: You can convert from integer to character and then to bit, and from bit to character and then to integer. These conversions give better control over how many digits should be used. NOTE: The example shown below is based on the assumption that you have created a QueryDef record called a1query, made it usable, and set its #QUERY_LINES and its #OUTPUT_LINES to at least 1. (The conversion will yield a hexadecimal number.) The following example converts an INTEGER, 12345, to BIT: a1query.query_line[1] = cast(12345 as char using 'uz 4'); The following example converts from BIT to INTEGER: write cast(substring(a1query.query_line[1] from 1 for 4) as int using 'uz 4'); If you would prefer to see a binary display rather than hex (which is used for the BIT type in SQLPlus), you should create a record defined by IntegerFormatDef with DISPLAY_RADIX_CODE set to -2 and DISPLAY_LENGTH set up to 32. Then you just have to call the CAST function using the defined record. Keywords: References: None
Problem Statement: When opening the Tag Search window from SQLplus Reporting you get the warning Server: <not available> Despite the message it is still possible to search, but when trying to add the tag to the report you get the message Connect Failed. This problem occurs when using Windows authentication (as set in IIS) for SQLplus Reporting in a domain environment.
Solution: For Windows authentication to work, the AspenAppPool in IIS must run as a user with permissions to lookup users' domain group memberships. If running as LocalSystem you need to change this to an authorized domain id. You do this by opening the IIS Manager and expanding the tree structure. Select AspenAppPool and then Advanced Settings… to open the Advanced Settings window. In the Advanced Settings window scroll down and select Identity, then press the button at the end of the line. In the Application Pool Identity window select custom account and press Set… to add an authorized domain account. Press OK to the windows to get back to the IIS Application Pools and then press Recycle for the AspenAppPool. Once you have done this, reopen Internet Explorer and try a tag search on the SQLplus Reporting web page to confirm this has resolved the problem. Keywords: AspenAppPool server not available tag search References: None
Problem Statement: Best fit data can be queried from the Aspen InfoPlus.21 database using Aspen SQLplus by writing a query to select information from the built-in history table with a script similar to the following: select ts, value from history where name like 'atcai' and ts between '04-MAR-16 01:00:00' and '04-MAR-16 02:00:00' and request = 3 and period = 1000 The variables "ts" and "value" return the time and value of the tag at the specified time, while setting the variable "request" equal to 3 tells Aspen SQLplus to return data with the best fit algorithm. This knowledge base article discusses the impact of selecting a value for the "period" parameter.
Solution: By definition, best fit data from a given time span, or bucket, returns the first, last, largest, smallest, and first bad values within that time span. Based on the data in your database, each span can return anything between 2 and 5 data points. In Aspen SQLplus, the period parameter is a measurement in tenths of seconds used to approximate the average time span between two data points. First, define the following variables that Aspen SQLplus uses to calculate the number of buckets needed to fill the timespan: ts = timespan, P = period, MH = maximum history points, HP = history points allocated to buckets, B = number of buckets. Aspen SQLplus first calculates the number of buckets and the time span encompassed by each bucket before determining the correct data to return for each bucket. The first two parameters needed are the period and the timespan as specified in the query. The maximum number of returned history points is determined by dividing the timespan by the average time span between data points (MH = ts/P). Two points are then reserved for the first and last data points, giving the number of points allocated to the buckets (HP = MH - 2). Since each bucket can hold a maximum of 5 data points, the number of history points is divided by 5 and rounded down to determine the number of buckets (B = HP/5, rounded down). Once the number of buckets is calculated, the buckets are evenly distributed within the given timespan and the first, last, maximum, minimum, and first bad data points are returned from each bucket and output in the Query Writer. Suppose a statement as follows with a tag named "ATCAI" that updates with a random value between 0 and 13.1 once every five seconds: select ts, value from history where name like 'atcai' and ts between '04-MAR-16 01:00:00' and '04-MAR-16 02:00:00' and request = 3 and period = 1000 The one-hour timespan equates to 3600 seconds. Converting the period, which is in tenths of a second (1000), to seconds gives 100. Dividing 3600 by 100 gives 36 (MH), the maximum number of history points returned. Reserving the first and last samples leaves 34 (HP) points for best fit data. Best fit data consists of the Min, Max, First, Last, and first Bad value within each bucket, so dividing 34 by 5 and rounding down gives 6 buckets. This means the code returns 6 buckets of best fit data. Note that since the history data within the requested timespan does not contain any sample with a Bad status, the code returned 4 history points for each bucket.
---- 04-MAR-16 01:00:00.0 9.18778 <== 1st sample of the timespan 04-MAR-16 01:00:00.3 9.72695 <== bucket 1 04-MAR-16 01:03:25.3 0.163673 04-MAR-16 01:04:55.3 13.0695 04-MAR-16 01:09:55.3 1.02355 04-MAR-16 01:10:00.3 0.347305 <== bucket 2 04-MAR-16 01:12:40.3 0.0479042 04-MAR-16 01:16:30.3 13.0315 04-MAR-16 01:19:55.3 5.86707 04-MAR-16 01:20:00.3 7.02435 <== bucket 3 04-MAR-16 01:23:15.3 13.0088 04-MAR-16 01:26:30.3 0.11018 04-MAR-16 01:29:55.3 10.0192 04-MAR-16 01:30:00.3 4.9992 <== bucket 4 04-MAR-16 01:30:10.3 12.9633 04-MAR-16 01:39:10.3 0.0858283 04-MAR-16 01:39:55.3 5.35529 04-MAR-16 01:40:00.3 7.9497 <== bucket 5 04-MAR-16 01:41:10.3 0.0746507 04-MAR-16 01:42:45.3 13.0739 04-MAR-16 01:49:55.3 6.47385 04-MAR-16 01:50:00.3 4.71617 <== bucket 6 04-MAR-16 01:50:50.3 0.0518962 04-MAR-16 01:53:50.3 13.0455 04-MAR-16 01:59:55.3 6.06986 04-MAR-16 02:00:00.0 3.5418 <== Last sample of the timespan ---- Keywords: Best fit References: None
Problem Statement: How do you insert a blank line in the output of the Aspen SQLplus Web Reports (web-based reporting)?
Solution: In Aspen SQLplus Web Reports it's possible to build both interactive and automated reports section-by-section where tag data and trend plots can be displayed, among other things. If there needs to be a blank line between these sections, simply utilize a 'Title/Text' section but do not specify anything for the title or text itself. Leave it blank. This will generate a blank line in the report. Keywords: References: None
Problem Statement: Why do I get the error message Disk history read error - 100 in Aspen SQLPlus? This error occurs when using the Process Data COM Object, and any other script executed also displays the error message Disk history read error - 100.
Solution: Add a new key in the registry and then add the string values and value data to the new key as shown in the steps below. Note: Please make a backup of the registry before continuing. (i) Open the Registry Editor. (ii) In the Registry Editor go to HKEY_CLASSES_ROOT\Wow6432Node\CLSID\{710B32A1-7277-11D1-932C-00805F0F1C84} and add a new key named ProcessList. (iii) Under ProcessList, add the new string values and value data listed below: String Value | Value Data Keywords: None References: None
Problem Statement: It has been noticed that the following error message can occur when trying to connect to a remote Oracle database using SQLplus: Failed to connect to link <name of link> Specified driver could not be loaded due to system error 126
Solution: This error message results from an incomplete PATH environment variable. When the Oracle client is installed, it edits the PATH environment variable, adding the value C:\ORANT\BIN. (The drive letter could be different depending on the location specified during the install.) This new setting should be added to both the System and User PATH environment variables. However, occasionally the setting will not be added to the User's PATH environment variable; this causes the error message shown above. If you obtain this error, verify that C:\ORANT\BIN has been added to both the User and System PATH environment variables. If not, edit either, or both, variables so that they include the correct setting. (Directions to view and reset environment variables are given below.) In most cases, the problem has been that the User variable is not correctly set. After editing an environment variable, the Oracle client application will need to be restarted so that it is aware of the edited environment variables. To verify the correct environment variables are set: Right-click on the My Computer icon and select Properties. In the System Properties window, select the Environment tab. Browse the System and User variable lists until you find the Path variable. Highlight Path so that the Variable and Value fields at the bottom of the System Properties window fill in. The Value field should list multiple paths; confirm the Oracle client path by scrolling through the list until you see C:\ORANT\BIN (substituting the correct drive letter for your computer). If you find that the PATH environment variable exists but does not include a setting for C:\ORANT\BIN, you will need to include the value. To do this: Highlight the Path variable so that the Variable and Value fields at the bottom of the System Properties window fill in. Scroll through the list of values already set for the Path variable and add C:\ORANT\BIN, separating it from the previous value with a semicolon. Click Set to save. If the PATH environment variable does not already exist -- which may be the case in the User variable list -- you will need to first add it by creating a new variable. To do this: Click a section of white space under the environment variable list. Type PATH as the name in the Variable field. Set the Value field to the correct Oracle client path (i.e. C:\ORANT\BIN). Click Set to save. Note: After editing or creating a new environment variable, you will need to restart your computer for the changes to take effect. Keywords: References: None
Problem Statement: What does the error Cannot follow chain to field mean when querying the contents of a record - usually the contents of the repeat area of that record?
Solution: The problem can usually be seen by viewing the record itself in the Aspen InfoPlus.21 Administrator. Look for any fields in the fixed or repeat areas where the contents of the field are either '<<<<' signs or '>>>>' signs. You now need to find how that field is formatted. What you will probably find is that the field is formatted by a Ghost Selector record. As with all fields formatted by selector records, there are a finite number of choices defined, and in this case the field with the strange signs has a value outside of the range of choices in the Ghost Selector record. The work-around is to modify/edit the field with the strange signs: click on the drop-down that shows the different choices and select a valid choice. Keywords: CHAIN '>>>>>' '<<<<<' References: None
Problem Statement: This knowledge base article contains instructions on how to interrogate the WMI interface from SQLplus to read a registry key.
Solution: WMI (Windows Management Instrumentation) is Microsoft's implementation of DMTF WBEM (DMTF = Distributed Management Task Force; WBEM = Web-Based Enterprise Management). The following example function can be saved in an SQLplus ProcedureDef record and called from any query.
Function getReg(regPath, regKey)
    local oReg;
    local regVal;
    local HKEY_LOCAL_MACHINE;
    HKEY_LOCAL_MACHINE = 2147483650;
    oReg = GetObject('winmgmts:\\.\root\default:StdRegProv');
    oReg.GetExpandedStringValue(HKEY_LOCAL_MACHINE, regPath, regKey, regVal);
    return regVal;
end
--Testing
--write getReg('SOFTWARE\AspenTech\Cim-IO','CIMIODEF');
--write getReg('SOFTWARE\AspenTech\Setup','ASPENROOTDir');
Keywords: Registry WMI SQL References: None
Problem Statement: How can you use Aspen SQLplus to populate a selector record? The example below demonstrates how to populate a record, TestRec, defined by Select8Def.
Solution: Create TestRec, defined by Select8Def, using the InfoPlus.21 Administrator. In SQLplus, use this sample script to add new occurrences to the repeat area. SET EXPAND_REPEAT = 1; INSERT INTO TestRec.1 (select_description) VALUES ('val 0'); INSERT INTO TestRec.1 (select_description) VALUES ('val 1'); INSERT INTO TestRec.1 (select_description) VALUES ('val 2'); INSERT INTO TestRec.1 (select_description) VALUES ('val 3'); Now the repeat area for this record looks like this in the InfoPlus.21 Administrator: Occurrence Selection Value SELECT_DESCRIPTION 1 0 val 0 2 1 val 1 3 2 val 2 4 3 val 3 Keywords: References: None
Problem Statement: How to display times independent of Daylight Savings Time (DST) adjustments?
Solution: The following query illustrates how to display timestamps disregarding the DST adjustments by using the function 'local_iso8601' in Aspen SQLPlus.
Local Hrs_i INT, Mins_i INT, Offset_i INT, Sign_c CHAR, CST_Adjustment INT;
write current_timestamp;
write 'iso: ' || local_iso8601(current_timestamp);
-- Determine Timezone Offset in Hours and Minutes from UTC Time.
Hrs_i = SUBSTRING(local_iso8601(current_timestamp) FROM 28 FOR 2);
write 'hrs_i: ' || Hrs_i;
Mins_i = SUBSTRING(local_iso8601(current_timestamp) FROM 31 FOR 2);
write 'Mins_i: ' || Mins_i;
Offset_i = (Hrs_i*60)+Mins_i;
write 'Offset_i: ' || Offset_i;
-- Determine if Timezone Offset is plus or minus.
Sign_c = SUBSTRING(local_iso8601(current_timestamp) FROM 27 FOR 1);
IF Sign_c = '-' THEN
    Offset_i = Offset_i * -1;
END;
-- Determine # HOURS offset from 6 (which is CST):
CST_Adjustment = (-6*60) - Offset_i;
write 'CST_Adjustment: ' || CST_Adjustment;
-- Show Time as CST:
write 'Central Standard Time: ' || current_timestamp+(CST_Adjustment*10*60);
Keywords: DST iso8601 References: None
Problem Statement: You may receive the error atConnections.GetServerList: Can't create ADODB.Recordset! when you go to View > Applications log in the Tag Browser. or You may open the Tag browser and see the servers list is empty as well as being unable to enter data into the name/description fields.
Solution: This is caused when the MSADO15.DLL becomes unregistered. This can happen when installing or upgrading AspenTech or third-party software. To resolve this error you will need to register the MSADO15.DLL file. Go to the 'START' button, then click 'RUN'. In the Open window type regsvr32 "C:\Program Files\Common Files\System\ado\msado15.dll" and press 'OK' (the quotes are needed because the path contains spaces). Alternatively you can open an MS DOS window and type in the above command, or type CMD in the Start > Run box to open an MS DOS window. You should receive a pop-up message saying DllRegisterServer in msado15.dll succeeded. Keywords: Msado15.dll Aspen Process Explorer Tag Browser References: None
Problem Statement: The ODBC Data Source Administrator will usually have an entry beneath the System DSN tab for the 'AspenTech SQLplus' driver if one chooses 'SQLplus ODBC' during product installation. How can the AspenTech SQLplus Driver entry be restored so that it appears in the ODBC Data Source Administrator?
Solution: On a 32-bit operating system (AspenTech SQLplus ODBC driver, 32-bit): Register the ip21odbc.dll located in \Windows\System32. Information about using regsvr32 to register a .dll may be found here: http://support.microsoft.com/kb/249873 On a 64-bit operating system: Solution 120486 Problem with adding an Aspen SQLplus DSN in the ODBC Data Source Administrator explains the difference between the 32-bit and 64-bit AspenTech SQLplus ODBC drivers. Based on your AspenTech SQLplus ODBC driver type, one of the following procedures has to be executed: a) AspenTech SQLplus ODBC driver, 32-bit: Register the ip21odbc.dll located in \Windows\SysWOW64. The cmd.exe (32-bit command prompt) is found in the same location. Information about using regsvr32 to register a .dll may be found here: http://support.microsoft.com/kb/249873 b) AspenTech SQLplus ODBC driver, 64-bit: Register the ip21odbc.dll located in \Windows\System32. The cmd.exe (64-bit command prompt) is found in the same location. Information about using regsvr32 to register a .dll may be found here: http://support.microsoft.com/kb/249873 Keywords: ODBC ip21odbc.dll libc21.dll References: None
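For example (a sketch assuming default Windows paths, and following the note above about opening the cmd.exe that sits in the same folder as the DLL): running regsvr32 ip21odbc.dll from a command prompt started in C:\Windows\SysWOW64 registers the 32-bit driver, while running the same command from a prompt started in C:\Windows\System32 registers the 64-bit driver. In either case you should receive a message that DllRegisterServer in ip21odbc.dll succeeded.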
Problem Statement: What causes the message Failed to view automated report: Access to the path 'C:\Inetpub\wwwroot\AspenTech\SQLplus\temp\ip21_servername\report_name.mht' is denied?
Solution: The AspenTech SQLplus report writer sends automated reports to the folder C:\Documents and Settings\All Users\Application Data\AspenTech\SQLplus\output. When someone uses the AspenTech SQLplus report writer to display an automated report, the report writer copies the report from C:\Documents and Settings\All Users\Application Data\AspenTech\SQLplus\output to C:\Inetpub\wwwroot\AspenTech\SQLplus\temp\ip21_servername. If you get the following message, start by checking the IIS (Internet Information Services) authentication methods for SQLplus: When Windows displays the SQLplus properties page, select the Directory Security tab and click the Edit button for Anonymous access and authentication control. If the box Anonymous access is checked, check the security settings for C:\Inetpub\wwwroot\AspenTech\SQLplus\temp\ip21_servername, and ensure the Internet Guest Account has write access to the folder. If not using anonymous access, then make sure the account of the user requesting the report has write access to C:\Inetpub\wwwroot\AspenTech\SQLplus\temp\ip21_servername. Keywords: References: None
Problem Statement: Is it possible, when creating the connection string for the AspenTech.SQLPlus.DataProvider, to specify the data source name instead of the Host name? You can do this in the DSN less connection string for the ODBC driver, but there seems to be no option to do this for the AspenTech.SQLPlus.DataProvider.
Solution: The AspenTech.SQLPlus.DataProvider doesn't support a data source name in the connection string. However, you can work around this by using an additional assembly, AspenTech.ADSA.Locator, as shown in the sample code below. Add references to both the AspenTech.ADSA.Locator and AspenTech.SQLplus.DataProvider assemblies. sample code:
class Program
{
    private static string m_SQLplusCatID = "{79f29695-5113-11d3-9ba4-00e02905d02b}";
    private static string m_DatasourceName = "your_datasource";
    static void Main(string[] args)
    {
        AspenTech.ADSA.Locator.IAtDataSourceLocator2 locator = new AspenTech.ADSA.Locator.DsaLocatorClass();
        AspenTech.ADSA.Locator.IAtPropertyBagEx props = (AspenTech.ADSA.Locator.IAtPropertyBagEx)locator.QueryDataSourceProperties(m_SQLplusCatID, m_DatasourceName);
        AspenTech.SQLplus.SQLplusConnectionStringBuilder MyConnection = new AspenTech.SQLplus.SQLplusConnectionStringBuilder();
        MyConnection.Host = (string)props.Read("Host");
        MyConnection.Port = Convert.ToString(props.Read("Port"));
        string CommandText = "SELECT name FROM analogdef";
        IDbDataAdapter adapter = new AspenTech.SQLplus.SQLplusDataAdapter(CommandText, MyConnection.ConnectionString);
        DataSet ds = new DataSet();
        adapter.Fill(ds);
        string xml = ds.GetXml();
        Console.WriteLine(xml);
    }
}
Keywords: AspenTech.SQLPlus.DataProvider References: None
Problem Statement: This Knowledge Base article shows how to include a company logo in an Aspen SQLplus web report e-mail.
Solution: Below are the steps to add an image to an SQLplus web report: 1. Start Internet Explorer. 2. Type the url: http://localhost/sqlplus 3. Select the Title/Text section type. 4. Type a section name. 5. Click Add. 6. Type some text as needed. 7. Select an image. The image must be located in the following folder: ..\Program Files\Common Files\Aspentech Shared\SQLplus\xsl\logos. 8. Click OK. 9. Click Automate. 10. Select an Aspen InfoPlus.21 server. 11. Type an Automated Report name. 12. Add an e-mail address. 13. Add a schedule time. 14. Click OK 15. Wait for the e-mail to arrive. The e-mail contains a report with the embedded image. Keywords: email gif jpg jpeg References: None
Problem Statement: Typically production batches last less than a day. However sometimes they are longer. It may not be convenient to test a batch configuration by waiting until enough production data accumulates into the history system over time.
Solution: The following script manually inserts history into IP.21, from which a batch configuration could then be tested by running the BCU over the inserted data. To use this script you need to: Make sure that the IP.21 historian has filesets old enough to accept your generated data. You would need to check that the tag which you are using can accept the old data. The xoldestok.exe utility will allow you to modify the oldest history data that is acceptable to IP.21 for that particular tag. (Solution 103040 contains detailed instructions on using the xoldestok utility.) Below is a sample script that manually inserts history data, based on the batch duration and the number of batches that you want to run/simulate. You can also specify the data frequency. IMPORTANT NOTE: This script is provided as a starting point for someone who likes the idea of simulating batch data for testing purposes, and is already comfortable programming with SQLplus. The script is not officially supported as an Aspen product. SQLPLUS Script
local enddate timestamp; -- end date of simulation
local startdate timestamp; -- start date of simulation
local highvalue real; -- value when batch is running
local lowvalue real; -- value when batch is not running
local batchduration real; -- duration of each batch in number of days
local noofbatchsimulated integer; -- number of batches to be simulated
local offbatchduration real; -- duration of idle time between batches
local datascanrate integer; -- 'scan rate frequency' of data
local batchstatus character(10); -- batch status (RUN/PAUSE)
local secperday integer; -- number of seconds per day
local reply character(1); -- (Y/N)
local counter real;
local tmpcountresult real;
-- user input
enddate = '04-Sep-02 12:00:00';
highvalue = 500;
lowvalue = 10;
batchduration = 2.5;
noofbatchsimulated = 3;
offbatchduration = 0.01;
macro iprecord = 'Analog1'; -- name of ip21 record to store simulated data
-- program
secperday = 24 * 60 * 60;
-- calculate the startdate of the first batch
startdate = enddate - ((noofbatchsimulated * batchduration * secperday * 10) + (noofbatchsimulated * offbatchduration * secperday * 10));
reply = 'W';
while not (reply = 'N' or reply = 'Y') do
  reply = prompt('Ensure that your history can accept data starting from ' || startdate || chr(10) || 'Ok to proceed (Y/N) ?');
  if reply = 'Y' then
    while startdate < enddate do
      -- off pause
      counter = 0;
      batchstatus = 'PAUSE';
      write 'batch paused at ' || startdate;
      tmpcountresult = offbatchduration * secperday / 10;
      while counter < tmpcountresult do
        insert into &iprecord(ip_trend_time, ip_trend_value, ip_trend_qstatus) values (startdate, lowvalue, 'GOOD');
        startdate = startdate + 100;
        counter = counter + 1;
      end;
      -- batch run
      counter = 0;
      batchstatus = 'RUN';
      write 'batch run at ' || startdate;
      tmpcountresult = batchduration * secperday / 10;
      while counter < tmpcountresult do
        insert into &iprecord(ip_trend_time, ip_trend_value, ip_trend_qstatus) values (startdate, highvalue, 'GOOD');
        startdate = startdate + 100;
        counter = counter + 1;
      end;
    end;
  end;
end;
Keywords: SQLplus Batch.21 Batch Generate References: None
Problem Statement: Inconsistent aggregates results as shown in the screen shot attached below are returned by SQLplus with respect to the number of decimal places displayed for F10.5 Source Tags when executing a SELECT / FROM aggregates Query. QUESTION: How can I control the displayed decimal digits for the data output results returned by any query? ** For Aggregates specific example, refer to EXAMPLE 2 provided below.
Solution: The USING keyword can be used after an expression in the SELECT statement list to specify an InfoPlus.21 format record for controlling the number of decimal digits displayed in the query output results. The specified InfoPlus.21 format record is used to format all values in the column on which the USING command is used, including any output results computed using the CALCULATE command. The full list of query syntax options allowed for an item in a SELECT statement that includes the USING command option is as follows: expression [[AS] alias] [USING format_record] [WIDTH int] [LEFT | RIGHT | CENTRE | CENTER] [break_clause | calculate_clause]
EXAMPLE 1: Simple SELECT with USING command on the Trend Value for an IP_AnalogDef Record.
SELECT IP_Trend_Value USING 'F7.3' from ATCAI;
EXAMPLE 2: Aggregates SELECT query including the USING command to force a specified format on the results for consistency in the number of displayed decimal digits.
SELECT NAME Width 7, TS Width 22, PERIOD Width 14 Center, AVG USING 'F7.4', MAX USING 'F7.2', MIN USING 'F7.2', CAST (SUM AS INTEGER USING 'I3') AS SUM Width 8 Center FROM aggregates WHERE Name = 'ATCAI';
EXAMPLE 3: Simple SELECT against IP_AnalogDef including USING, BREAK, and CALCULATE commands for Tag Names Like ATC%.
SELECT Name Width 7 BREAK SKIP 2, IP_Trend_Value AS Value USING 'F10.4' Width 12 CALCULATE COUNT AND AVG AND MIN AND MAX AND SUM, PAD(BOTH IP_Trend_Time TO 24) AS TimeStamp Center FROM IP_AnalogDef WHERE Name LIKE 'ATC%' AND IP_Trend_Time BETWEEN '02-JUL-10 14:45:00.0' AND '02-JUL-10 14:45:30.0';
IMPORTANT: In the results returned by the query noted above, note that the command BREAK SKIP 2 adds two blank rows between the result sets returned for each IP_AnalogDef tag, and the data format specified by the USING 'F10.4' command is applied both to the retrieved history results and to the CALCULATE results for the calculated average, minimum, maximum, and sum. Keywords: aggregates BREAK CALCULATE decimal digits format USING References: None
Problem Statement: The TRIM function appears to not work when converting a 4-digit year to a 2-digit year. Running this query:
local styear char(5);
styear = Trim(leading '20' from '2001');
write styear;
styear = Trim(leading '20' from '2002');
write styear;
styear = Trim(leading '20' from '2003');
write styear;
Gives this result: 1 3 The original analysis was performed with 3 separate queries and it was interpreted that the 2002 query failed. Looking at the results more carefully, the results are consistent. The 2001 and 2003 queries trim one 2 and two 0's. The 2002 query trims two 2's and two 0's. This is the way the TRIM function is designed to work. It removes any characters in the specified set of characters up to the point where a character not in the set is encountered. So TRIM('20' FROM '2002') removes any leading or trailing 2 or 0 characters and this gives an empty string. You would also get an empty string for 2000, 2020 or 2022. You would also get the same result using the LEADING or TRAILING modifiers. Here are some more examples of how TRIM works:
TRIM(LEADING '20' FROM '2002203') = '3'
TRIM(LEADING '20' FROM '20023200') = '3200'
TRIM('20' FROM '20023200') = '3'
Solution: Text will be added to the SqlPlus help file in v4.1 Service Pack 1: If trim_chars contains more than one character, TRIM removes any of the specified characters in what ever order they appear. e.g.: TRIM(LEADING '20' FROM '2002203') = '3' Work Around: The substring function is better suited to changing a 4-digit year into a 2-digit year: local styear char(5); styear = substring('2002' from 3 for 2); write styear; Result: 02 If you wanted to remove the leading zero, you could add a trim for this: local styear char(5); styear = trim(leading '0' from substring('2002' from 3 for 2)); write styear; Result: 2 Note that this differs from styear = substring('2002' from 4 for 1); which would only work until 2009. Keywords: References: None
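If this conversion is needed in several queries, the workaround above can be wrapped in a small function stored in a ProcedureDef record (a hedged sketch; the function name year2 is arbitrary):
function year2(y char(4))
   return substring(y from 3 for 2);
end
write year2('2002');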
Problem Statement: I have an Aspen SQLplus query/procedure that I use to validate the data entry that gets assigned into Aspen InfoPlus.21 for Analog and Discrete tags, and I also want to include a query in my procedure to validate whether a specified selector or format record for IP_VALUE_FORMAT is valid, based on the default list of format selections that Aspen InfoPlus.21 provides in the IP_VALUE_FORMAT drop-down pick lists for IP_AnalogDef and IP_DiscreteDef.
Solution: -- Steps to retrieve IP_AnalogDef value selections for IP_VALUE_FORMAT -- NOTE: IP_AnalogDef uses the field record named DISPLAY_WHOLE_DIGITS as its SEARCH_KEY_RECORD, and this is what controls the default selections available for IP_VALUE_FORMAT in the Aspen InfoPlus.21 Administrator for IP_AnalogDef tags. Verification procedures for the required query: 1. Run a query to verify if there are any other field records that use the same FIELD_NUMBER (hex) of 1018 that is used by the DISPLAY_WHOLE_DIGITS field record.
Select NAME from FieldLongNameDef where FIELD_NUMBER (hex) = '1018';
Result: DISPLAY_WHOLE_DIGITS is the only value returned by the above query, which confirms that the IP_VALUE_FORMAT selection list for IP_AnalogDef is controlled only by records that contain the field record DISPLAY_WHOLE_DIGITS as part of their configuration. 2. Run a second query to verify which definition records include DISPLAY_WHOLE_DIGITS as part of their configuration for the #_OF_FIELDS_IN_REC for the records defined by the definition record.
Select NAME from DefinitionDef where FIELD_NAME_RECORD = 'DISPLAY_WHOLE_DIGITS';
Result: RealFormatDef is the only definition record name returned by the above query. Query to retrieve the default IP_VALUE_FORMAT selections Aspen InfoPlus.21 uses for IP_AnalogDef tags: Based on the verification tests discussed above, the query required for retrieving the matching list of default IP_VALUE_FORMAT selections used by the Aspen InfoPlus.21 Administrator for IP_AnalogDef tags is as follows:
Select NAME from ALL_RECORDS where Definition = 'RealFormatDef';
Results: NAME ------------------------ E3E2 E5E2 E5E3 F 5. 2 F 6. 1 F 6. 2 F 6. 3 F 7. 2 F 7. 3 F 7. 4 F 7. 5 F 9. 3 F10. 3 F10. 4 F10. 7 F15. 0 F12. 7 F15. 8 F22.11 F 7.1 F 9.0
-- Steps to retrieve IP_DiscreteDef value selections for IP_VALUE_FORMAT -- NOTE: IP_DiscreteDef uses the field record named DISPLAY_RADIX_CODE as its SEARCH_KEY_RECORD, and this is what controls the default selections available for IP_VALUE_FORMAT in the Aspen InfoPlus.21 Administrator for IP_DiscreteDef tags. Verification procedures for the required query: 1. Run a query to verify if there are any other field records that use the same FIELD_NUMBER (hex) of 1006 that is used by the DISPLAY_RADIX_CODE field record.
Select NAME from FieldLongNameDef where FIELD_NUMBER (hex) = '1006';
Result: This time the select query provided above returns multiple values, which means that the IP_VALUE_FORMAT selection list in this case for IP_DiscreteDef tags is controlled by several selector and format records belonging to multiple definition records, each of which includes one or more of the field records listed below as part of the defined record configuration for the #_OF_FIELDS_IN_REC.
Results: NAME -------------------- DISPLAY_RADIX_CODE #_OF_SELECTIONS IP_#_OF_EU_NAMES IP_#_OF_EU_TYPES #_OF_ALRMS #_OF_AREAS #_OF_DISPS #_OF_PLOTS #_OF_RPRTS #_OF_UNITS Q_#OF_FIXED_COMMS
Query to retrieve the default IP_VALUE_FORMAT selections Aspen InfoPlus.21 uses for IP_DiscreteDef tags: Since the default IP_VALUE_FORMAT selection list used by the Aspen InfoPlus.21 Administrator for IP_DiscreteDef tags represents a combination of selector and format record selection types controlled by multiple field name and definition records, a set of nested FOR-DO loops with embedded select statements is required to retrieve the desired selection results. The required query as tested is included below (it should return approx. 
~ 238 record names): IMPORTANT: You can also customize the query provided below as you wish to include any added IF statements you need for value comparison, and you can also use the 'commented' command lines shown in green text to store the list values to an array and write them out using the WRITE statement as needed. --Local arrRecCount INTEGER; --Local x, RecNameSelection; DECLARE LOCAL TEMPORARY TABLE MODULE.FORMAT_RECORD_NAMES(Format_Record_Name CHAR(50), Definition_Record_Name CHAR(50)); SET LOG_ROWS = 0; --arrRecCount = 0; FOR (Select NAME as FieldRecName from FieldLongNameDef where FIELD_NUMBER (hex) = '1006') DO FOR (Select NAME as DefRecName from DefinitionDef where FIELD_NAME_RECORD = FieldRecName) DO FOR (Select NAME as IPValueFormatRecName from ALL_RECORDS where Definition = DefRecName) DO --Redim(RecNameSelection, arrRecCount); --RecNameSelection[arrRecCount] = IPValueFormatRecName; INSERT INTO MODULE.FORMAT_RECORD_NAMES VALUES (IPValueFormatRecName, DefRecName); --arrRecCount = arrRecCount + 1; END END END --FOR EACH x IN RecNameSelection DO -- WRITE x; --END Select Format_Record_Name, Definition_Record_Name from MODULE.FORMAT_RECORD_NAMES ORDER BY Definition_Record_Name, Format_Record_Name; Keywords: None References: None
Problem Statement: How do you select averages from the aggregates pseudo-table ignoring trend values that are beyond a threshold limit?
Solution: The Aspen SQLplus aggregates pseudo-table does not allow you to filter trends before selecting averages. The solution is to first define an Aspen Calc shared ad-hoc calculation in the Aspen InfoPlus.21 database to perform the filtering you need. For example, suppose you want to calculate averages for ATCAI after eliminating readings less than 12. First define a record against IP_CalcDef named ATCAIThreshold. Set the field #CALC_LINES to 1, and expand the repeat area. Enter the following into the field CALC_LINE in the repeat area: =IF {ATCAI} < 12.0 THEN null ELSE {ATCAI} This defines an Aspen Calc shared ad-hoc calculation that returns a null value if ATCAI is less than 12. Note: Aspen Calc does not have to be installed for shared ad-hoc calculations to work. Finally, create a query similar to the following that accesses the aggregates pseudo-table through the shared ad-hoc calculation: select ts as "Time Stamp", avg as "ATCAI Five Minute Average" from aggregates where name in ('ATCAIThreshold') and ts between '10:00' and '10:30' and period = 00:05:00; The aggregates pseudo-table ignores the NULL values returned by ATCAIThreshold when calculating averages. If all the trend values for a time period are less than 12, the query returns 0. Note: The query returns values only if using the syntax: where name in ('ATCAIThreshold'). The statement where name = 'ATCAIThreshold' causes the query to return No rows selected. Keywords: aggregates threshold shared ad-hoc calculation References: None
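For comparison, the unfiltered five-minute averages over the same window can be read directly from the raw tag (a sketch reusing the period and time range from the example above):
select ts as "Time Stamp", avg as "ATCAI Five Minute Average (unfiltered)"
from aggregates
where name in ('ATCAI') and ts between '10:00' and '10:30' and period = 00:05:00;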
Problem Statement: Solution 103115 explains how to create a second instance of the TSK_SQL_SERVER task and configure it to provide a read-only connection. For the Automated Aspen SQLplus Web Report Tool to work properly, you cannot connect to an ADSA data source where the port number for the Aspen SQLplus Service Component is the one used for the read-only connection. If the existing ADSA data source is configured to point to the read-only connection, you will have to create a new data source for the Aspen SQLplus Web Report Tool.
Solution: Create a new data source pointing to the read/write connection. To do this: 1. Go to the Aspen Data Source Architecture (ADSA) configuration tool. 2. Edit the Public Data Sources or User Data Sources list (depending on which you are using), and add a new ADSA data source. 3. This new data source will have the standard services, but the Aspen SQLplus Service Component MUST use the port number configured in the instance of TSK_SQL_SERVER intended for read and write connections. NOTE: This port number is found in the command line parameters of the TSK_SQL_SERVER task as well as in the services file located in C:\WINDOWS\system32\drivers\etc\. From now on, every time you use the Aspen SQLplus Web Report Tool, you will have to select this new ADSA data source. Keywords: New port Web Report Tool References: None
Problem Statement: How can I update the Pending Number of Points for a selected Repository using Aspen SQLplus?
Solution: Included File Attachments: Set Pending Number of Points.SQL You can use the attached SQL script to update the Pending Number of Points for a selected Repository via the Aspen SQLplus Query Writer. IMPORTANT: In order to use the AtIP21HistAdmin COM objects, the machine used to execute the script must have the Aspen InfoPlus.21 History Admin 1.0 Type Library checked and enabled in the Aspen SQLplus Query Writer references. You can set this option from the menu bar by clicking View | References... to open the references dialog. The attached SQL script contains the following command text:
Local objHistAdmin AtIP21HistAdmin, ObjRepository AtIP21SingleRepository;
Local RepName CHAR(24);
Local CurrentPoints DOUBLE, PendingPoints DOUBLE;
RepName = 'TSK_DHIS';
--Set Append = 'C:\HistAdminCOMTest.txt';
--The command line above is for an optional output file; if enabled then ALL status messages
--and write statement results get written to the output file
objHistAdmin = New('AtIP21HistAdmin');
-- alternate usage, see SQLplus Help
-- objHistAdmin = CreateObject('Aspen.IP21.AtIP21HistAdmin');
ObjRepository = objHistAdmin.GetSingleRepository(RepName);
CurrentPoints = ObjRepository.NumberOfPoints;
Write 'Current Number of Points Assigned for the ' || RepName || ' Repository = ' || CurrentPoints;
Write '';
ObjRepository.PendingNumberOfPoints = 1500;
ObjRepository.Update;
PendingPoints = ObjRepository.PendingNumberOfPoints;
Write 'Current Number of Pending Points Assigned for the ' || RepName || ' Repository Now = ' || PendingPoints;
Write '';
Write 'You MUST now Restart IP.21 in order to complete and apply the pending Repository Points Change.';
Keywords: AtIP21HistAdmin Pending Number of Points Please enter a valid UNC pathname \nodename\folder\folder Remote History Administration Repository UNC Paths References: None
Problem Statement: The following error message is returned when executing a query: SYSTEM commands, CREATEOBJECT and file writes disabled at line ###. For example:
Solution: Solution 1: This error is a result of the implementation of Aspen SQLplus security, and of the user executing the query not having been granted permission for the System Command object. The query contains a SET OUTPUT command, which requires the System Command permission. System Command allows users with this privilege to: 1. execute SYSTEM commands 2. execute the CreateObject and GetObject functions 3. execute SET OUTPUT statements when used to direct output to a file 4. execute New functions if the type library has System as the security setting 5. modify database link definitions from the Aspen SQLplus Query Writer. The user must belong to a role with System Command permission. To grant this, open the 'AFW Security Manager' and select the Aspen InfoPlus.21 server name: Right-click on the server name ('SGCHSAM' in this example) and click 'Properties'. Grant 'System Command' permission to the desired application role. Click the <Ok> button when done. ** You can either wait for the AFW security cache to be refreshed or manually refresh it via the 'AFW Security Client Tool'. For the Aspen SQLplus Query Writer, you will need to close the Aspen SQLplus client application and re-open it. Solution 2: If you have implemented a read-only Aspen SQLplus port, then this error may be caused by a mismatch of read/write ports. Check the port used by TSK_SQL_SERVER against the ports used in the Aspen SQLplus client. In the task TSK_SQL_SERVER, an administrator may use the 'c' parameter to disable SYSTEM commands. For example, if TSK_SQL_SERVER listening on port 10014 was started with a 'c' parameter, SYSTEM commands cannot be executed when an Aspen SQLplus client connects via this port number. Keywords: Sqlplus Query Writer system commands References: None
Problem Statement: How do I catch errors on Remote Connections in an Aspen SQLPlus query?
Solution: Catch a remote connection error with a BEGIN-EXCEPTION-END block; the remote query must be started as a sub-query.
BEGIN
   START 'OracleQuery';
EXCEPTION
   ERROR ERROR_CODE, ERROR_TEXT;
END
Keywords: Failed to connect to link MYLINK [MicroSoft][ODBC SQL Server driver][TCP/IP Socket] Specified SQL Server not found. References: None
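A slightly fuller sketch of the same pattern (assumptions: 'OracleQuery' is a QueryDef record containing the statements that use the remote link, and ERROR_CODE / ERROR_TEXT are the same built-in error values used above; here the handler writes the error out instead of re-raising it):
BEGIN
   START 'OracleQuery';
EXCEPTION
   WRITE 'Remote query failed, error code ' || ERROR_CODE;
   WRITE ERROR_TEXT;
END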
Problem Statement: The error Unknown function: Function_Name at line x is received in Aspen SQL Plus
Solution: This problem may be due to a missing registry entry on the Aspen InfoPlus.21 server that you are running the query against. 1. From the Windows Start menu choose Run. 2. Type regedit and select OK. 3. Under the path HKEY_LOCAL_MACHINE\SOFTWARE\AspenTech\InfoPlus.21\SQLplus there should be a registry key called External Functions with the string value GetPermis shown in the screen shot below. Keywords: Aspen SQL Plus; GETDBPERMIS; GETRECPERMIS; GETFLDPERMIS; GETWRITELEVEL; References: None
Problem Statement: How can I schedule an Aspen SQLplus query record to run periodically?
Solution: 1. Create a query record in the IP.21 Administrator. o For this example, the record name will be daily_report. o Use querydef for the definition record. o Set the external_task_record field to TSK_IQ1 o Make the record USABLE. 2. Start the SQLplus Query Writer client tool. o Create the query. The example below selects all names and input values for records defined by ip_daidef and saves the data in the file daily_report_data. set output 'c:\temp\daily_report_data'; select name, ip_input_value from ip_daidef; o Click on the Record menu and select Save As. Select daily_report from the record name list. Set the definition to QueryDef. Click OK. 3. Modify the daily_report record in the IP.21 Administrator. o Set the Schedule_Times repeat area to 1. o Set the Schedule_Time to the first time you want to run the query (e.g. 12:00:00 to run it first at noon). o Set the Reschedule_Interval to the frequency this query should be run (e.g. 0:01:00:0 to run it every hour). Keywords: schedule scheuleactdef querydef References: None
Problem Statement: Is there a way to format large numbers with commas? For example, can you display 1,000,000 instead of 1000000?
Solution: Formatting large numbers with commas is possible using the function comma_format: function comma_format(num string) local newstr string; local decstr string; local minusstr string; local x string; local ndx int; -- -- Check if string is an integer or a floating point number. -- Save the decimal portion if this is a floating point number -- ndx = position('.' in num); if ndx = 0 then x=num; decstr = ''; else x=substring(num from 1 for ndx-1); decstr = substring(num from ndx for character_length(num)); end -- -- Check if the string represents a negative number. If so, save the -- minus sign -- ndx = position('-' in x); minusstr = ''; if (ndx = 1) then x = substring(x from 2 for character_length(x)); minusstr = '-'; end -- -- Insert commas -- while character_length(x) > 3 do newstr = ','||substring(x from character_length(x)-2 for 3)||newstr; x = substring(x from 1 for character_length(x)-3); end newstr = x||newstr; -- -- Put the string back together -- return minusstr||newstr||decstr; end write comma_format( -1234567.89); Keywords: comma format References: None
Problem Statement: Is it possible to use a VIEW to read a text file and perform a SELECT on specific tags?
Solution: This article provides a sample SQLplus query. This query allows the user to perform a SELECT statement (using AGGREGATES) on a list of specific tags. The list of tags in this example is stored as a text file. First, the text file must be created. A file containing the following tag names from the demo database, is saved under the name 'sqltest.txt' in root of the C drive. ATCAI ATCDAI ATCF101 ATCF102 ATCF201 ATCF202 ATCF301 Next, create an SQLplus VIEW that pulls the tag names from the text file. In the following sample code, the view is named dsview. CREATE VIEW dsview AS SELECT SUBSTRING (1 of LINE) AS tagname FROM 'c:\sqltest.txt'; It is then possible to create and run an SQL query against the view. The following sample query accesses the view to retrieve the list of tags. A select statement is then run against each tag name in the view. The query FOR SELECT * FROM dsview DO SELECT name, ts, avg FROM aggregates WHERE name=tagname and period = '01:00:00' and ts > '21-jul-05 08:00:00'; END will return results in the following format. name ts avg ------------------------ -------------------- -------------- ATCAI 21-JUL-05 09:00:00.0 6.49358 ATCAI 21-JUL-05 10:00:00.0 6.48275 ATCAI 21-JUL-05 11:00:00.0 6.57708 ATCAI 21-JUL-05 12:00:00.0 6.46627 name ts avg ------------------------ -------------------- -------------- ATCDAI 21-JUL-05 09:00:00.0 3.24676 ATCDAI 21-JUL-05 10:00:00.0 3.24112 ATCDAI 21-JUL-05 11:00:00.0 3.28858 ATCDAI 21-JUL-05 12:00:00.0 3.23347 name ts avg ------------------------ -------------------- -------------- ATCF101 21-JUL-05 09:00:00.0 30.0125 ATCF101 21-JUL-05 10:00:00.0 30 ATCF101 21-JUL-05 11:00:00.0 40 ATCF101 21-JUL-05 12:00:00.0 0 name ts avg ------------------------ -------------------- -------------- ATCF102 21-JUL-05 09:00:00.0 13.388 ATCF102 21-JUL-05 10:00:00.0 11.78 ATCF102 21-JUL-05 11:00:00.0 15.1806 ATCF102 21-JUL-05 12:00:00.0 0.0948567 name ts avg ------------------------ -------------------- -------------- ATCF201 21-JUL-05 09:00:00.0 50.0208 ATCF201 21-JUL-05 10:00:00.0 50 ATCF201 21-JUL-05 11:00:00.0 66.6667 ATCF201 21-JUL-05 12:00:00.0 0 name ts avg ------------------------ -------------------- -------------- ATCF202 21-JUL-05 09:00:00.0 0 ATCF202 21-JUL-05 10:00:00.0 0 ATCF202 21-JUL-05 11:00:00.0 0 ATCF202 21-JUL-05 12:00:00.0 0 name ts avg ------------------------ -------------------- -------------- ATCF301 21-JUL-05 09:00:00.0 33.3333 ATCF301 21-JUL-05 10:00:00.0 66.6667 ATCF301 21-JUL-05 11:00:00.0 50 ATCF301 21-JUL-05 12:00:00.0 0 Keywords: VIEW aggregates References: None
Problem Statement: Aspen Cim-IO transfer records are not updating at the setting in IO_Frequency. Manual or scheduled polling of data works as expected, but when enabling S&F, only ONE initial datapoint comes in.
Solution: Most of the time this is a configuration issue. Here is a list of configuration items to verify. In this case, synchronous data transfer works fine (manual polling and initialization use the synchronous main task), which means that we need to check and correct the settings for asynchronous communication:
- TSK_A_xxx is running in the Aspen InfoPlus.21 Manager with the correct executable: cimio_c_async.exe
- TSK_A_xxx has a corresponding External Task record defined and linked to the correct logical device record
- The logical device record has IO_Async set to YES, and the correct TSK_A_xxx defined
- The logical device record has IO_Store_Enable? set to YES
- The transfer record has IO_Async? set to YES and a positive IO_Frequency defined
After verifying the configurations above, if there is still a problem then try this:
- Stop the Aspen Cim-IO client tasks TSK_M_xxx, TSK_A_xxx, TSK_U_xxx
- In the logical device record set IO_Record_Processing to OFF then ON
- Re-start the Aspen Cim-IO client tasks
If this does not resolve the issue then perform a clean start of Aspen Cim-IO as outlined in KB article #103176 Keywords: Store & Forward IO_Frequency Asynchronous Wait for Async References: None
Problem Statement: Timestamps stored in the InfoPlus.21 database have microsecond precision. However, standard SQLplus queries return timestamps at the resolution of the TS20 format (a 10th of a second). How can a database timestamp value with microsecond precision be returned when querying the database?
Solution: The way to display a database timestamp value with microsecond precision is to cast the value to character format using a wider timestamp format record such as TS25. For example (for tag name = atcai):
select cast(ip_trend_time as char using 'TS25') from atcai.1 where ip_trend_time > '30-jul-03 10:06:05' and ip_trend_time < '30-jul-03 10:10:40';
This produces output such as: CAST(ip_trend_time AS CHARACTER USING 'TS25') 30-JUL-03 10:10:35.300000 30-JUL-03 10:10:30.900000 30-JUL-03 10:10:25.300000 30-JUL-03 10:10:20.300000 30-JUL-03 10:10:15.300000 Keywords: References: None
Problem Statement: Sometimes, when receiving text files to be read into IP.21 or Setcim via SQLplus, there are errors in the data. These errors will cause the INSERT INTO, UPDATE, or whatever statement you happen to be using to fail. For example, if you're reading in numeric values from a text file and your file has UNDEF in that field for various records, this would cause the statement to fail because of the data type violation. Yet, for the system providing the file, this is a legitimate value.
Solution: There is a way to skip past those bad entries so that a null is returned instead of an error. This allows the entire file to be read without errors. Use the SET command CONVERT_ERRORS to accomplish this. CONVERT_ERRORS is a toggle of 0 and 1, where 1 is the default and will show the errors. To turn that off, simply use SET CONVERT_ERRORS 0; Below is an example. The incoming file contains tag names and values and looks like this - numbers.txt: tag1 67.8 tag2 57.9 tag3 undef tag4 100.4 tag5 2.6 tag6 undef tag7 42.8 We are trying to update the tags with the new values with the following code:
SET CONVERT_ERRORS 0;
UPDATE ip_analogdef, 'numbers.txt'
SET ip_input_value = SUBSTRING (2 OF LINE)
WHERE name = SUBSTRING (1 OF LINE);
Without the SET CONVERT_ERRORS statement, an error would have occurred on the 3rd record and execution would have stopped. Now, the 3rd and 6th tags will have a null value in their ip_value (which displays as ?????). Keywords: CONVERT_ERRORS bad data input file References: None
Problem Statement: A common need in reporting is to determine statistics on a certain tag while a condition on another tag is true (i.e. a valve is open, etc.). This can be done with JOINs in SQLPlus.
Solution: For example, to get the average value of tag F101 for all times when V101 = 1, the following query would work:
select avg(F1VALUE) from
(SELECT NAME, TS, VALUE AS V1VALUE FROM HISTORY
 WHERE (NAME = 'V101') AND (PERIOD = '0:00:15') AND (STEPPED = '1')
 AND TS BETWEEN '1-JUN-04 10:00:00' AND '3-JUN-04 10:00:00')
JOIN
(SELECT NAME, TS, VALUE AS F1VALUE FROM HISTORY
 WHERE (NAME = 'F101') AND (PERIOD = '0:00:15') AND (STEPPED = '0')
 AND TS BETWEEN '1-JUN-04 10:00:00' AND '3-JUN-04 10:00:00')
USING (TS)
WHERE V1VALUE = 1;
This works by JOINing a select of values for V101 with a select of values for F101 on the field TS. Since we are calling the history table, we are assured that the times will match exactly. The two JOINed selects return the following result: AVG(F1VALUE) 98.8775 Keywords: References: None
Problem Statement: After an upgrade of Microsoft SQL Server or the application of a Microsoft patch the following error could occur and prevent connection to Aspen SQLPlus: SQLplus - Qualifiers: Failed to connect to link SQLSERVER [Microsoft][ODBC SQL Server Driver][SQL Server]Login failed for user '(null)'. Reason: Not associated with a trusted SQL Server connection.
Solution: In the event of an occurrence as described above, the following steps should be evaluated one at a time in resolving the connectivity issue. 1. Check to see if the TSK_SQL_SERVER is present and running in Aspen InfoPlus.21 Manager. If not present, manually recreate the task as shown in the figure below: a. Open the InfoPlus.21 Manager and in the New Task Definition window, under Task name (TSK_XXXX) type in TSK_SQL_SERVER. b. Under Executable, type in the executable path and save it. (the path may vary depending on the drive where InfoPlus.21 is installed on your system). In the example above the directory path is: C:\Program Files\AspenTech\InfoPlus.21\db21\code\sqlplus_server.exe 2. Check to see if either the upgrade or the installation of an upgrade has changed the version of the ODBC driver. If this is the case, update the ODBC driver and create a new database link between Microsoft SQL Server and Aspen SQLPlus using the Microsoft ODBC DSN. 3. If none of the above steps resolves the connectivity issue, verify in the registry that the registry key: HKLM\SOFTWARE\ODBC\ODBC.INI\LocalServer\Trust_Connection variable has not been changed from a YES to a NO. If it has, reset this to a YES. Keywords: ODBC DSN SQL SQLPlus environment References: None
Problem Statement: Getting the error Failed to open SETCIMLINKS at line 1 when running the following query for an IP_AnalogDef tag: INSERT INTO bob.test (ip_trend_value, ip_trend_time, ip_trend_qstatus) VALUES (98.6, '28-JAN-02 09:30:00', 'GOOD');
Solution: If the name of the tag contains a dot, you must use double quotes around it. For example: INSERT INTO "bob.test" (ip_trend_value, ip_trend_time, ip_trend_qstatus) VALUES (98.6, '28-JAN-02 09:30:00', 'GOOD'); Keywords: setcimlinks setcim link setcim links insert update history References: None
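The same double-quoting applies to any other statement that references the record name. For example, a hedged read-back of the inserted value (following the pattern used elsewhere in these examples of selecting history directly from a tag name):
SELECT ip_trend_time, ip_trend_value FROM "bob.test" WHERE ip_trend_time >= '28-JAN-02 09:00:00';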
Problem Statement: Saving a query to a custom external task is not possible because the new task does not appear in the task list.
Solution: Aspen SQLplus external tasks must follow the naming convention TSK_IQx, such as TSK_IQ1, TSK_IQ2, etc., to appear in the list. If your task does not follow this rule, you have to type the external task name in manually. Keywords: TSK_IQ1 External Tasks References: None
Problem Statement: Some tags are periodically reset by sticking a new value of '0' into the tag history, even though it does not reflect the value of the measurement that the tag represents at that time. When the range of the tag is calculated for a certain time span, the range will include that '0' measurement and represent an inaccurate range for that data set.
Solution: To work around the issue where the range function does not correctly calculate the true range of values in the tag, a query can be used to work around this issue. The query will first calculate the maximum value in the range before calculating the minimum non-zero value in the range and subtracting the two values to arrive at the range. The sample query for the range of non-zero values for the tag ATCAI since the beginning of the day is shown below: local MAX_VAL real, MIN_VAL real, Range real; MAX_VAL = (select max(IP_TREND_VALUE) from ATCAI where IP_TREND_TIME > '00:00:00'); MIN_VAL = (select min(IP_TREND_VALUE) from ATCAI where (IP_TREND_TIME > '00:00:00') and IP_TREND_VALUE > 0); Range = MAX_VAL - MIN_VAL; write Range; Keywords: References: None
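The same result can also be obtained in a single statement (a sketch using the same tag and time window as above; it matches the two-step query whenever the maximum value in the window is greater than zero, since filtering out the zeros does not change the maximum in that case):
SELECT MAX(IP_TREND_VALUE) - MIN(IP_TREND_VALUE) AS "Non-zero Range"
FROM ATCAI
WHERE IP_TREND_TIME > '00:00:00' AND IP_TREND_VALUE > 0;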
Problem Statement: Indirection used on a remote IP.21 database (in our example the link is ip21) gives the error invalid RECORD value: test at line 1. Using indirection in exactly the same way locally on that IP.21 works fine. Why? Example query - this works: SELECT name,definition,name->ip_plant_area FROM all_records AS b WHERE b.definition = 'ip_analogdef'; Example query - this doesn't work and gets the error: SELECT name,definition,name->ip_plant_area FROM ip21.all_records AS b WHERE b.definition='ip_analogdef';
Solution: SQLplus indirection is always resolved on the local InfoPlus.21 system; it is never passed to a remote server (database link). So, the example SQLplus statement is reading the record name from a remote server but then using record indirection on the local server. If the same record doesn't exist on the local server, you will get the invalid record error. The way around this is to create a view on the remote server and select from that view, e.g. CREATE VIEW view1 AS SELECT name,definition,name->ip_plant_area area FROM all_records; -- On the remote database. then SELECT * FROM ip21.view1; -- On the local database. The actual example doesn't even need to do that. It could be changed to: SELECT name,definition,ip_plant_area FROM ip21.ip_analogdef; Keywords: indirection view References: None
Problem Statement: How do I display values formatted by selector records as integers?
Solution: Use the statement cast(value as integer) as shown in the following example: write cast(testtag.IP_INPUT_VALUE as integer) Keywords: OFF/ON Discrete Query cast selector References: None
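A hedged variant of the same cast applied across several records at once (testtag above is assumed to be an IP_DiscreteDef tag; adjust the name pattern to suit):
SELECT NAME, CAST(IP_INPUT_VALUE AS INTEGER) AS "Raw Value"
FROM IP_DiscreteDef
WHERE NAME LIKE 'TEST%';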
Problem Statement: Sometimes it may be necessary to know what day of the year it is on a particular day. However there is no direct command in SQLplus which would allow you to obtain an integer representing the day of the year (similar to DAY_OF_WEEK command).
Solution: This SQLplus query syntax calculates an integer representing the day of the year. This is accomplished by subtracting the start of the year from the start of the current day. write (cast('00:00' as timestamp format 'HH:MI') - cast('1-1' as timestamp format 'MM-DD'))/24:00+1 NOTE for users running AMS v6.x and above: DAY_OF_YEAR is now a standard function in SQL+. Keywords: DAY_OF_YEAR Day of the Year References: None
Problem Statement: This knowledge base article describes how to determine when an Aspen SQLplus iqtask.exe (TSK_IQ#) process is hung.
Solution: If one suspects an IQ external task is frozen, go to the Aspen InfoPlus.21 Administrator | Definition Records | ExternalTaskDef and look at the specific TSK_IQ#. Right-click on it and go to Properties. Check the External Task tab and see if there is a query that has been processing in the queue for a lengthy amount of time. If there is, the query may be written inefficiently, since most queries should process quickly. This check can also be accomplished programmatically. To check if a TSK_IQ process is stuck while processing a record, you could write a query which calls the TASK_CURRENT() function twice (with a given amount of time in between the calls) like this:
local a char(80);
local b char(80);
a = task_current('iq1');
wait 15;
b = task_current('iq1');
write a;
write b;
text_compare(a,b);
Then compare the output from each function call using the Text_Compare() function. If the same query takes longer than 30 seconds to complete (or whatever is a typical processing time for a query in your environment) you could then restart the TSK_IQ process in question or pursue other avenues to troubleshoot the problem. Keywords: frozen freeze crash hang References: None
Problem Statement: How do you insert data into IP21 that includes both CDT and CST timestamps?
Solution: The following example query reads data from a text file converts the CDT and CST time stamps to an ISO8601 format then inserts the data into IP21. ProcedureDef records must be created for each of the functions listed below. (month_number, utc_offset, format_iso) Example below includes: Query (ISO_Insert), Functions (month_number, utc_offset, format_iso) to be stored as ProcedureDef records Text file (data.txt), Example select statements showing the results. (ISO_Insert) local t1s char(32); local t1 timestamp; local val real; local loop int; loop = 1; for (select substring(line from 1 for 22) from 'c:\work\data.txt' where linenum = loop) do t1s= (select substring(line from 1 for 22) from 'c:\work\data.txt' where linenum = loop); val = (select substring(4 of line)from 'c:\work\data.txt' where linenum = loop); t1 = format_iso(t1s); write iso8601(t1); write val; insert into analogdata (ip_trend_time, ip_trend_value) values (t1,val); loop = loop + 1; end; (month_number) function month_number(m char(3)) return case m when 'JAN' then '01' when 'FEB' then '02' when 'MAR' then '03' when 'APR' then '04' when 'MAY' then '05' when 'JUN' then '06' when 'JUL' then '07' when 'AUG' then '08' when 'SEP' then '09' when 'OCT' then '10' when 'NOV' then '11' when 'DEC' then '12' end; end (utc_offset) function utc_offset(code) return case code when 'CDT' then '-05:00' when 'CST' then '-06:00' end; end (format_iso) function format_iso(t char(32)) return '20'||substring(t from 8 for 2)||'-'|| month_number(substring(t from 4 for 3))||'-'|| substring(t from 1 for 2)||'T'|| substring(2 of t)|| utc_offset(substring(3 of t)); end (data.txt) 27-Oct-02 01:00:00 CDT 01.00 27-Oct-02 01:15:00 CDT 01.15 27-Oct-02 01:30:00 CDT 01.30 27-Oct-02 01:45:00 CDT 01.45 27-Oct-02 01:59:00 CDT 01.59 27-Oct-02 01:00:00 CST 02.00 27-Oct-02 01:15:00 CST 02.15 27-Oct-02 01:30:00 CST 02.30 27-Oct-02 01:45:00 CST 02.45 27-Oct-02 01:59:00 CST 02.59 27-Oct-02 02:00:00 CST 03.00 27-Oct-02 02:15:00 CST 03.15 27-Oct-02 02:30:00 CST 03.30 Here are the results of select statements with IP21 standard, and ISO8601 timestamp format. select ip_trend_time, ip_trend_value from analogdata; select iso8601(ip_trend_time), ip_trend_value from analogdata; ISO8601(ip_trend_time) ip_trend_value 2002-10-27T08:30:00.000000Z 3.3000 2002-10-27T08:15:00.000000Z 3.1500 2002-10-27T08:00:00.000000Z 3.0000 2002-10-27T07:59:00.000000Z 2.5900 2002-10-27T07:45:00.000000Z 2.4500 2002-10-27T07:30:00.000000Z 2.3000 2002-10-27T07:15:00.000000Z 2.1500 2002-10-27T07:00:00.000000Z 2.0000 2002-10-27T06:59:00.000000Z 1.5900 2002-10-27T06:45:00.000000Z 1.4500 2002-10-27T06:30:00.000000Z 1.3000 2002-10-27T06:15:00.000000Z 1.1500 2002-10-27T06:00:00.000000Z 1.0000 ip_trend_time ip_trend_value 27-OCT-02 02:30:00.0 3.3000 27-OCT-02 02:15:00.0 3.1500 27-OCT-02 02:00:00.0 3.0000 27-OCT-02 01:59:00.0 2.5900 27-OCT-02 01:45:00.0 2.4500 27-OCT-02 01:30:00.0 2.3000 27-OCT-02 01:15:00.0 2.1500 27-OCT-02 01:00:00.0 2.0000 27-OCT-02 01:59:00.0 1.5900 27-OCT-02 01:45:00.0 1.4500 27-OCT-02 01:30:00.0 1.3000 27-OCT-02 01:15:00.0 1.1500 27-OCT-02 01:00:00.0 1.0000 Keywords: cdt cst utc iso8601 sqlplus insert timestamps References: None
Problem Statement: When using a variable in the WHERE condition of a SELECT query against a remote table, an error is encountered. Below is the error message encountered when querying a Batch.21 database on Microsoft SQL Server. Below is the error message encountered when querying a database on a MySQL database system. As can be seen above, the error message will differ depending on the database system the remote table is on and the ODBC driver used to connect to the database.
Solution: The query to the remote table needs to be rewritten so that the variable is concatenated into the SQL query to be executed on the remote table. It will need to be executed using either the EXEC or EXECUTE command. For example, the following SQL query will result in an error.
DECLARE LOCAL TEMPORARY TABLE MODULE.BatchID(batch_id INT);
INSERT INTO MODULE.BatchId SELECT DISTINCT batch_id FROM B21.AspenBatch.dbo.char_batch_data;
FOR (SELECT batch_id id FROM MODULE.BatchId WHERE batch_id >= 5064) DO
   SELECT batch_id, char_id, char_inst, char_time, char_value, subbatch_id FROM B21.AspenBatch.dbo.char_batch_data WHERE batch_id = id;
END
DISCONNECT B21;
However, when it is rewritten to the following, it can be executed without any error.
DECLARE LOCAL TEMPORARY TABLE MODULE.BatchID(batch_id INT);
LOCAL tmpStr CHAR(150);
INSERT INTO MODULE.BatchId SELECT DISTINCT batch_id FROM B21.AspenBatch.dbo.char_batch_data;
FOR (SELECT batch_id id FROM MODULE.BatchId WHERE batch_id >= 5064) DO
   tmpStr = 'SELECT batch_id, char_id, char_inst, ' || 'char_time, char_value, subbatch_id ' || 'FROM AspenBatch.dbo.char_batch_data ' || 'WHERE batch_id = ' || id;
   EXEC tmpStr ON B21;
END
DISCONNECT B21;
Another way is to use a MACRO. However, this will require two separate SQL scripts to be written, with one calling the other using the START command.
Query 1:
DECLARE LOCAL TEMPORARY TABLE MODULE.TEMP(tt TIMESTAMP, tv INT);
INSERT INTO MODULE.TEMP SELECT TrendTime t, TrendValue v FROM MySQL.ProcessData.Tag1Data WHERE Trendvalue > 8;
SELECT * FROM MODULE.TEMP;
FOR (SELECT tv n FROM MODULE.TEMP) DO
   START 'C:\query2.sql', n;
END
DISCONNECT MySQL;
Query 2:
SELECT TrendTime t, TrendValue v FROM MySQL.ProcessData.Tag1Data WHERE Trendvalue = &1;
DISCONNECT MySQL;
Keywords: SQLplus remote table Invalid column name Unknown column References: None
Problem Statement: Aspen SQLplus reporting provides functionality to create reports which can be sent automatically by email to a recipient list. However, if you are not using Aspen SQLplus Reporting, it is possible to include the result of a query as part of an email by creating HTML code as the result of the query and including it in the message body. This solution provides example code showing how to do this, as well as a simple query which can be used as a template for more complex reports. This example uses the code in Solution 132567 to send the email.
Solution: Pure HTML code for emails can be created in SQLplus simply by writing it into a variable which will hold the HTML body of the message, for example:
local htmlMessage;
htmlMessage = '<html><head></head><body>' || CHR(10);
--Message code here,
htmlMessage = htmlMessage || '</body></html>';
The body of the message can include HTML markup such as:
htmlMessage = htmlMessage || '<H3>Custom SQLplus Report</H3>' || CHR(10);
htmlMessage = htmlMessage || '<P> <B>This is custom SQLplus report!</B>' || CHR(10);
htmlMessage = htmlMessage || '<P> Created on pure HTML code' || CHR(10);
htmlMessage = htmlMessage || '<BR> <B><I>(Adapt this to your need)</I></B>' || CHR(10);
htmlMessage = htmlMessage || '<BR>' || CHR(10);
It is possible to include the result of a SQLplus SELECT statement using code like:
For (Select Name, IP_Description from IP_AnalogDef where Name like 'ATCL%') Do
   htmlTable = htmlTable || '<tr>' || CHR(10);
   htmlTable = htmlTable || '<td>' || Name || '</td>' || CHR(10);
   htmlTable = htmlTable || '<td>' || IP_Description || '</td>' || CHR(10);
   htmlTable = htmlTable || '</tr>' || CHR(10);
End
Here each row returned by the query is used to build a <tr><td>...</td></tr> element of the HTML code. A complete simple query to send an email message as HTML can start with declarations like:
local msg, FSO, Fldr, File, RecentFile;
local SMTPServer, Sender, Recipient, Subject, message, htmlMessage, htmlTable;
Keywords: None References: None
Problem Statement: How can one delete unwanted informational messages from the query result? For example, someone wrote a query to insert data into four temporary tables. Each time data is inserted, the result window in the Query Writer says ## rows inserted, and this message repeats four times since there are four temporary tables to insert into. Is it possible to eliminate this, and other similar messages, from the query result area?
Solution: One can use the SET command to remove the informational messages from the result. By default, SET LOG_ROWS is set to 1, which means you will see these messages. If you don't want to see these messages, then you will need to change it to 0. For example, SET LOG_ROWS 0; will suppress ALL informational messages in the result. This is documented in the SQLplus Online Manual. The keyword to search for this information is log; then select LOG_ROWS. Keywords: SET LOG_ROWS verbose extra remark References: None
Problem Statement: After upgrading to version 3.0 and above of the Aspen Manufacturing Suite, some customers have reported that the IQ command can not be run simply by typing the command name at a command prompt. To execute IQ, it is necessary to first change to the directory containing the executable.
Solution: Include the %SETCIMCODE% directory (<Drive Letter>:\Progra~1\AspenTech\InfoPlus.21\db21\code) in the Path system environment variable on the system. On Windows NT: Right click on the My Computer icon on the desktop and select Properties. Then select the Environment tab. Under System Variables, double click on Path and add the following entry separated by a semi-colon. <Drive Letter>:\Progra~1\AspenTech\InfoPlus.21\db21\code On Windows 2000: Right click on the My Computer icon on the desktop and select Properties. On the Advanced tab, select Environment Variables. Under system variables, double click on Path and add the following entry separated by a semi-colon. <Drive Letter>:\Progra~1\AspenTech\InfoPlus.21\db21\code Keywords: command location References: None
Problem Statement: In an Aspen SQLplus script, variables are declared to hold TIMESTAMP values. However when the declared variables are used within a SELECT query, the following error message is received. Sometimes, you will get No rows selected message. However when the actual value is substituted into the query, results are returned as shown below.
Solution: This is caused by the way the variables are being declared. LOCAL tstart, tend TIMESTAMP; The variables tstart and tend need to be declared individually. Even though the statement above appears to declare both variables as timestamps, in actual fact the variable tstart is not being declared with a data type at all. Hence tstart is a VARIANT variable and only tend is declared as a TIMESTAMP variable. To declare the variables, use either LOCAL tstart TIMESTAMP, tend TIMESTAMP; or LOCAL tstart TIMESTAMP; LOCAL tend TIMESTAMP; Keywords: Failed to convert variant value to TIMESTAMP No rows selected References: None
Problem Statement: How do you add large groups of records to folders using SQLPlus?
Solution: There are two methods in which you could employ. Assumption: You have already created the records you would like to put into folders as well as the folders you wish to put them into. METHOD ONE: The first is straightforward and allows quick adding of records to folders. Assuming you have a folder created, called PlantA, you could execute the following query: Set Expand_Repeat 1; Insert Into planta(record_name) Select name From IP_AnalogDef Where name Like 'a%'; METHOD TWO: The second allows you the flexibility of using a text file in order to populate your folders. Step 1: Create a query that will select all the records that you would like to put into a folder. If you have more than one folder you want to insert into, you'll need to create more than one query for the records. For example, if you have a bunch of records that start with A that go into folder PlantA and another group of records that start with B that go into folder PlantB, you would follow the example below: Example: SET OUTPUT 'c:\Atags.txt'; SET COLUMN_HEADERS 0; SELECT name FROM IP_AnalogDef WHERE name like 'a%'; OR-- If you already have a text file with a list of the desired tag names; edit the file so it does not contain any headers. Example : Atags.txt ATCDAI ATCF101 ATCF102 ATCF103 ATCF201 ATCF202 ATCF301 ATCF302 ATCL101 ATCL102 ATCL103 ATCL201 ATCL301 ATCL302 ATCL401 Step 2 Write the following SQLPlus script Set Expand_Repeat 1; Insert Into planta (record_name) select substring(1 of line) from 'atags.txt' Keywords: folder tag INSERT References: None
Problem Statement: Sample Query that returns Engineering Unit name, its Occurence Value and its Selection Value.
Solution: The following queries can be used to list all Engineering Units by name along with their Occurrence Number and Selection Value: select distinct ip_eng_units from ip_discretedef; select * from "eng-units"; select occnum as "Occurrence Number", occnum-1 as "Selection Value", select_description as "Engineering Unit Name" from "eng-units"; Keywords: None References: None
Problem Statement: In Aspen SQLplus, the EXECUTE command allows for the execution of a general statement on a link such as: EXECUTE statement ON link_name; It is also possible to use a variable for the link_name in the execute statement so that one can loop through a list of servers.
Solution: The example query below would allow one to accomplish the above objective. Function getlink(dummy int) local fromlink char(16); For (select line aline from 'C:\Program Files\AspenTech\InfoPlus.21\db21\sql\links.dat' where substring(4 of line between ',') = 'MyServerName') DO return substring(4 of aline between ','); end end Macro mylink=getlink(1); local descript char(24); descript=EXEC 'SELECT NAME FROM IP_textDef' on &mylink; write descript; --Note links.dat resides in C:\Program Files\AspenTech\InfoPlus.21\db21\sql Keywords: link_name links.dat EXECUTE References: None
Problem Statement: How do I build a list of tags where the 'enable tag replication' choice has been checked?
Solution: The function GETREPLIC(RECID) returns the replication status of a record. The following query lists the name and replication status of all records defined by IP_AnalogDef: SELECT NAME, GETREPLIC(RECID) FROM IP_AnalogDef; GETREPLIC(RECID) returns 0 if replication is not enabled for a record and 1 if it is enabled. Therefore, to get a list of just the IP_AnalogDef records with replication enabled, change the query to: SELECT NAME, GETREPLIC(RECID) FROM IP_AnalogDef WHERE GETREPLIC(RECID) = 1; Finally, use the following query to get a list of all records with replication enabled: SELECT NAME, GETREPLIC(RECID) FROM all_records WHERE GETREPLIC(RECID) = 1; Keywords: None References: None
Problem Statement: While configuring a new ODBC connection using the Aspen SQLplus driver, an error message reporting Windows system error code 193 is displayed.
Solution: Windows error 193 means the application is not a valid Win32 application. In this context it means that the driver being used does not match the bit architecture of the application. This can be caused either by having the wrong driver file in System32 or SysWOW64, or by having the driver path in the registry point to the wrong driver. For a 64-bit SQLplus ODBC connection, configured with the ODBC Data Source Administrator (odbcad32.exe) located in [Drive]:\Windows\System32, verify that the Driver and Setup values under [HKEY_LOCAL_MACHINE\SOFTWARE\ODBC\ODBCINST.INI\AspenTech SQLplus] point to C:\Windows\System32\ip21odbc.dll. For a 32-bit connection, configured with odbcad32.exe located in the SysWOW64 folder, verify that the Driver and Setup values under [HKEY_LOCAL_MACHINE\SOFTWARE\Wow6432Node\ODBC\ODBCINST.INI\AspenTech SQLplus] point to C:\Windows\SysWOW64\ip21odbc.dll. If you still receive the error message after confirming the registry entries, check that the ip21odbc.dll file has the proper architecture for the connection you want to use. You can use the Sigcheck.exe tool from Windows Sysinternals (http://technet.microsoft.com/en-us/sysinternals/bb897441.aspx) to check the file architecture as follows: sigcheck -a <file path> On 64-bit Windows the 32-bit driver should be located in the SysWOW64 folder, while the 64-bit driver should be located in the System32 folder. Ensure that this is correct and move the proper file to the correct path. For further reference on the 32-bit and 64-bit driver file locations, please consult Solution 120486. Keywords: System error code 193 SQLplus ODBC driver ip21odbc.dll 32-bit 64-bit References: None
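As a quick check of the registry entries described in the solution above, the Driver value under each key can be queried from a command prompt (only standard Windows commands are used here; the key names are the same ones referenced above):
reg query "HKLM\SOFTWARE\ODBC\ODBCINST.INI\AspenTech SQLplus" /v Driver
reg query "HKLM\SOFTWARE\Wow6432Node\ODBC\ODBCINST.INI\AspenTech SQLplus" /v Driver
Each command prints the path configured for the corresponding driver, which can then be compared against the System32 and SysWOW64 locations listed above.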
Problem Statement: How do you convert an Aspen InfoPlus.21 timestamp to ISO8601 or LOCAL_ISO8601 format?
Solution: LOCAL_ISO8601 timestamps are in the format yyyy-mm-ddThh:mi:ss.ssssss+/-hh:mi, where "+/-" represents "+" or "-" and hh:mi after "+/-" represents an offset from UTC. In this case, yyyy-mm-ddThh:mi:ss.s represents local time. The letter T is required, as is the UTC offset +/-hh:mi. ISO8601 timestamps are in the format yyyy-mm-ddThh:mi:ss.ssssssZ and represent UTC time. The letters T and Z are required. Use the function LOCAL_ISO8601 to convert an InfoPlus.21 timestamp to LOCAL_ISO8601 format. For example, in the central time zone of the United States, the query local tmp timestamp; tmp = '18-JUN-14 07:00:00'; write LOCAL_ISO8601(tmp); returns 2014-06-18T07:00:00.000000-05:00 The function LOCAL_ISO8601 has an optional second argument that specifies the number of digits for fractions of seconds. For example, in the central time zone of the United States, the query local tmp timestamp; tmp = '18-JUN-14 07:00:00'; write LOCAL_ISO8601(tmp,1); returns 2014-06-18T07:00:00.0-05:00 Use the function ISO8601 to convert an InfoPlus.21 timestamp to ISO8601 format. For example, in the central time zone of the United States, the query local tmp timestamp; tmp = '18-JUN-14 07:00:00'; write ISO8601(tmp); returns 2014-06-18T12:00:00.000000Z The function ISO8601 also has an optional second argument that specifies the number of digits for fractions of seconds. For example, in the central time zone of the United States, the query local tmp timestamp; tmp = '18-JUN-14 07:00:00'; write ISO8601(tmp,1); returns 2014-06-18T12:00:00.0Z Keywords: References: None
Problem Statement: Which operator is better to use when pulling a text tag containing the operator's name into the LastOperatorName variable? If you want to evaluate a text tag with SQLplus and pull a text tag containing the operator's name into the LastOperatorName variable, which is char(24), should you use the LIKE operator or the TRIM function? Both will work, but if you want an exact match with = instead of LIKE, where more than one case may otherwise be true, you are better off using TRIM. Example CASE statement: Case When LastOperatorName = 'Randy Judd' then LastOperatorNum = 1231; When LastOperatorName = 'Larry Pentz' then LastOperatorNum = 1232; When LastOperatorName = 'Bill Payne' then LastOperatorNum = 1233; When LastOperatorName = 'Steve Pentz' then LastOperatorNum = 1234; When LastOperatorName = 'Joe Pentz' then LastOperatorNum = 1235; When LastOperatorName = 'Mitch Francis' then LastOperatorNum = 1236; When LastOperatorName = 'Kelly Flinton' then LastOperatorNum = 1237; When LastOperatorName = 'Steve Robinson' then LastOperatorNum = 1238; else LastOperatorNum = -1; end
Solution: Try using the TRIM function so the comparisons look something like: When TRIM(LastOperatorName) = 'Randy Judd' then LastOperatorNum = 1231; The TRIM function removes characters (by default, spaces) from the start, the end, or both ends of a character string. For more detail about TRIM, check the online help file in SQLplus. Keywords: trim sql References: None
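A minimal, self-contained sketch of the approach described above (the literal assignment simply stands in for reading the operator name from the text tag, and the variable names match the example in the problem statement):
local LastOperatorName char(24);
local LastOperatorNum integer;
LastOperatorName = 'Randy Judd';  -- held in a char(24) variable, so padded with trailing spaces
Case
When TRIM(LastOperatorName) = 'Randy Judd' then LastOperatorNum = 1231;
else LastOperatorNum = -1;
end
write LastOperatorNum;  -- 1231; TRIM removes the padding before the comparison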
Problem Statement: The following entry comes from the Stored procedures in ODBC topic in the SQLplus help files. However, the help file does not include a specific example. Stored procedures are useful in ODBC. The ODBC standard only allows for execution of a single SQLplus statement at a time. Stored procedures allow that single statement to be a call of a Stored Procedure. The Stored Procedure can then make use of the procedural statements of SQLplus. If you are using actual ODBC statements, you should use the ODBC syntax for Stored Procedure calls, as follows: {call procedure(params)} for a PROCEDURE and {?=call function(params)} for a FUNCTION. NOTE: ADO (ActiveX Data Objects) can hide this syntax if you set the CommandType to adCmdStoredProc.
Solution: The following is a Visual Basic example that calls a stored procedure with a parameter (note that connection strings, procedure names, and string parameter values must be passed as quoted strings): Dim adoCon As New ADODB.Connection Dim adoRefer As New ADODB.Command Dim adoRecords As ADODB.Recordset adoCon.Open ("SQLplus on yelm") adoRefer.ActiveConnection = adoCon adoRefer.CommandText = "refer" adoRefer.CommandType = adCmdStoredProc adoRefer.Parameters(0) = "definitiondef" Set adoRecords = adoRefer.Execute() Here are some examples of timestamp parameters. These examples use the following SQLplus stored procedure: procedure get_history(n char(32), t1 timestamp, t2 timestamp) select ts, value from history where name = n and ts between t1 and t2 and request=4; end Example 1 uses the InfoPlus.21 timestamp format: Dim adoCon As New ADODB.Connection Dim adoRecords As ADODB.Recordset adoCon.Open ("DRIVER={AspenTech SQLPlus};HOST=myhost") Set adoRecords = adoCon.Execute( _ "{call get_history('atcai', '10-oct-02 20:14', '10-oct-02 20:18')}") Do While Not adoRecords.EOF Debug.Print adoRecords!ts, adoRecords!Value adoRecords.MoveNext Loop Example 2 uses the ODBC standard timestamp format: Dim adoCon As New ADODB.Connection Dim adoRecords As ADODB.Recordset adoCon.Open ("DRIVER={AspenTech SQLPlus};HOST=myhost") Set adoRecords = adoCon.Execute( _ "{call get_history('atcai', timestamp'2002-10-10 20:14:00', timestamp'2002-10-10 20:18:00')}") Do While Not adoRecords.EOF Debug.Print adoRecords!ts, adoRecords!Value adoRecords.MoveNext Loop Example 3 uses Visual Basic Date variables: Dim adoCon As New ADODB.Connection Dim adoRecords As ADODB.Recordset Dim t1 As Date Dim t2 As Date t1 = #10/10/2002 8:13:00 PM# t2 = #10/10/2002 8:18:00 PM# adoCon.Open ("DRIVER={AspenTech SQLPlus};HOST=myhost") Dim adoCommand As New ADODB.Command adoCommand.ActiveConnection = adoCon adoCommand.CommandText = "get_history" adoCommand.CommandType = adCmdStoredProc adoCommand.Parameters(0).Value = "atcai" adoCommand.Parameters(1).Type = adDBTimeStamp adoCommand.Parameters(1).Value = t1 adoCommand.Parameters(2).Type = adDBTimeStamp adoCommand.Parameters(2).Value = t2 Set adoRecords = adoCommand.Execute Do While Not adoRecords.EOF Debug.Print adoRecords!ts, adoRecords!Value adoRecords.MoveNext Loop Keywords: ODBC stored procedure parameter timestamp References: None
Problem Statement: A query run using Aspen SQLplus selecting data from a Microsoft SQL Server database via an ODBC link returns null values from a column with the varchar(max) data type, even though the same query returns (correct) non-null values when executed using SQL Server Management Studio.
Solution: Changing the problem column's data type to varchar(256) allows the non-null values to come across into Aspen SQLplus. The problem is also seen when using the Microsoft SQL Server Native Client ODBC driver. If you switch to the Microsoft SQL Server ODBC Driver instead, columns with the varchar(max) data type will be readable. The SQL Server Native Client driver is intended for applications that are more tightly linked to specific SQL Server functions and features, and AspenTech recommends that you do not use it with Aspen SQLplus. Keywords: blank empty space statement text field References: None
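If changing the column's data type is an acceptable workaround, a minimal sketch of the change described above, run on the SQL Server side (table and column names are placeholders), would be:
ALTER TABLE dbo.MyTable ALTER COLUMN MyTextColumn VARCHAR(256);
After the change, re-run the original Aspen SQLplus query against the link to confirm the values are returned.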
Problem Statement: Using Aspen SQLplus, how can you determine which record and field activated a query?
Solution: In Aspen SQLplus, two functions return information about the activating record and field: ACTIVATION_RECORD and ACTIVATION_FIELD. For example, consider a QueryDef record named 'activation' that is activated by change of state (COS) on the tag 'atcai ip_input_value'. For this example, the functions return the following: ACTIVATION_RECORD - the record that is being activated (activation). ACTIVATION_FIELD - the record name and occurrence number of the field with which the record is activated (activation 1 wait_for_cos_field). *ACTIVATION_FIELD - the name of the tag referenced in the WAIT_FOR_COS_FIELD (atcai ip_input_value). **ACTIVATION_FIELD - the value of the tag referenced in the WAIT_FOR_COS_FIELD (10). Where applicable, these functions return the record name and the contents of the activating field; otherwise they return a NULL value. Keywords: activation record field SQL+ References: None
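A minimal sketch of a query body that could be placed in such a QueryDef record to log this information (it assumes the COS activation setup described above, and that the * and ** dereference levels behave as listed):
-- record that was activated
write ACTIVATION_RECORD;
-- record name, occurrence number and field of the activating field
write ACTIVATION_FIELD;
-- tag referenced in the WAIT_FOR_COS_FIELD
write *ACTIVATION_FIELD;
-- current value of that referenced tag
write **ACTIVATION_FIELD;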
Problem Statement: When running a query in the X-Windows SQLplus Interface and querying large amounts of data, a Memory Full error message may be encountered. What can be done to resolve this?
Solution: There are a number of things that can be tried to remedy this situation. The first suggestion is to limit the query further by adding additional WHERE conditions, for instance by shortening a time range to narrow down the amount of data being retrieved. If changing the query is not an option, another suggestion is to increase the VMS Page File Quota. On VMS, the SQLplus Memory Full message is given when the SQLplus process reaches its Page File Quota. The Page File Quota is a VMS limit on the virtual memory that a process can use. There is a default Page File Quota for each user, and you can set a Page File Quota when you run a process. You could try increasing the Page File Quota for the user specified for the SQLplus server. You can find the name of the user for the service using a command like: ucx show service sqlp10014/full You can change the Page File Quota for this user using the AUTHORIZE program: set def sys$system run authorize UAF> mod nancy/pgflquo=60000 (where nancy is the user name) If neither of the previous suggestions helps, the last suggestion is to define a new logical name. For the SQLplus X-Windows interface (like the Microsoft Windows interface), there is also the SETCIMSQLMAX logical name. This is the maximum number of bytes that can be held in the output of the windows interface. You could try defining it as follows: define SETCIMSQLMAX 20000000 Keywords: memory full query SETCIMSQLMAX Page File Quota References: None
Problem Statement: How can one use Aspen SQLplus to access a record's references? Is there a way to get the references like what one does by right-clicking a record in the Aspen InfoPlus.21 Administrator and choosing 'Show References...'?
Solution: This is an example of using the NXTREFER function to display references to IP_AnalogDef and all tags associated with IP_AnalogDef. --This feature is available in SQLplus using the NXTREFER function. --For example, the following returns all the references to IP_AnalogDef: local search_for record, found_field field; search_for = 'IP_AnalogDef'; found_field = nxtrefer(NULL, search_for); while found_field is not null do write found_field; found_field = nxtrefer(found_field, search_for); end Keywords: reference SQLplus Show References References: None
Problem Statement: Using Aspen SQLplus, an attempt to read the lines from a Unicode or multibyte format file will fail, typically returning garbage characters, when the standard syntax for reading lines from a text file is used.
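For reference, the standard line-reading syntax being referred to is presumably the usual FOR (SELECT LINE FROM 'file') construct shown below (the file path is only an example); this is the form that fails against Unicode or multibyte files:
For (select line from 'C:\temp\myfile.txt') do
write line;
end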
Solution: Use this modified syntax when reading lines from files that are not plain text files: Keywords: chaos characters garbage extended ANSII ASCII failure for select line system write References: None
Problem Statement: What is the SECONDS field in the Aspen SQLPlus AGGREGATES pseudo table? SECONDS is not documented in Aspen SQLPlus on-line help.
Solution: SECONDS is the number of seconds since January 1, 1970, for the aggregates time stamp. Keywords: SECONDS AGGREGATES References: None
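As a hedged illustration (the tag name, time range, and 10-minute period below are only examples), SECONDS can be selected alongside the other aggregate columns, for instance when a downstream application expects Unix-epoch timestamps:
select ts, seconds, avg
from aggregates
where name = 'atcai'
and ts between '01-JAN-24 00:00' and '01-JAN-24 06:00'
and period = 00:10:00;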
Problem Statement: How do you subtract timestamps in a query?
Solution: Use the function DELTA_TIME as illustrated in the following example: local t1 timestamp, t2 timestamp; local t3 integer; t2 = (select ip_trend_time from atcai where occnum=1); t1 = (select ip_trend_time from atcai where occnum=20); t3 = delta_time(t2, t1); update duration set ip_input_value = t3; The result of this calculation is the number of tenths of seconds between the two timestamps. The query updates the record duration, defined by IP_DiscreteDef, with the delta time. If you want to see the result in tenths of seconds as an integer, such as 342800, set the field IP_VALUE_FORMAT in the record duration to an integer format such as I11. If you would like to see the result reformatted as a delta time, such as 09:31:20, set the field IP_VALUE_FORMAT in the record duration to DT14. If you want to see the result as a whole number of minutes, such as 571, set the field IP_VALUE_FORMAT in the record duration to I6 and change the query to divide the delta time by 600: t3 = delta_time(t2, t1)/600; Keywords: References: None