Hi, thanks for your earlier responses. I have one more problem: I want to retrieve the first 4000 characters of a LONG column without resorting to a PL/SQL block in the application, so I just wrote a function like this:

create or replace function getlong1( p_rowid in rowid ) return varchar2
as
    l_data long;
begin
    select remarks into l_data from cr_claimheader where rowid = p_rowid;
    return substr( l_data, 1, 4000 );
end;

to get the first 4000 characters of REMARKS, and I am calling this function like this:

select getlong1(rowid) from cr_claimheader where headersrno = 4005

This works fine when the data in the REMARKS (LONG) column is less than 4000 characters, but if the length of the data is greater, it just returns NULL. How can I solve this problem in a simple manner? It's very urgent. Thanks and bye, regards, Periasamy
In 8i and up, this will work:

[email protected]> create or replace function getlong( p_tname in varchar2,
                                                     p_cname in varchar2,
                                                     p_rowid in rowid ) return varchar2
as
    l_cursor   integer default dbms_sql.open_cursor;
    l_n        number;
    l_long_val varchar2(4000);
    l_long_len number;
    l_buflen   number := 4000;
    l_curpos   number := 0;
begin
    dbms_sql.parse( l_cursor,
                    'select ' || p_cname || ' from ' || p_tname || ' where rowid = :x',
                    dbms_sql.native );
    dbms_sql.bind_variable( l_cursor, ':x', p_rowid );

    dbms_sql.define_column_long( l_cursor, 1 );
    l_n := dbms_sql.execute( l_cursor );

    if ( dbms_sql.fetch_rows( l_cursor ) > 0 )
    then
        dbms_sql.column_value_long( l_cursor, 1, l_buflen, l_curpos,
                                    l_long_val, l_long_len );
    end if;
    dbms_sql.close_cursor( l_cursor );
    return l_long_val;
end getlong;
/

Function created.

[email protected]> drop table t;

Table dropped.

[email protected]> create table t ( x long );

Table created.

[email protected]> declare
    data long default rpad( '*', 32000, '*' );
begin
    insert into t values ( data );
end;
/

PL/SQL procedure successfully completed.

[email protected]> select getlong('T', 'X', rowid) from t;

GETLONG('T','X',ROWID)
--------------------------------------------------------------------------------
**************************************************.....
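The function above returns only the first 4000 bytes because l_curpos is fixed at 0. Since DBMS_SQL.COLUMN_VALUE_LONG takes an offset, the same pattern can hand back any 4000-byte slice of the LONG. A minimal sketch, using a hypothetical getlong_piece wrapper that is not part of the original answer:

create or replace function getlong_piece( p_tname  in varchar2,
                                          p_cname  in varchar2,
                                          p_rowid  in rowid,
                                          p_offset in number default 0 )
return varchar2
as
    l_cursor   integer default dbms_sql.open_cursor;
    l_n        number;
    l_long_val varchar2(4000);
    l_long_len number;
begin
    dbms_sql.parse( l_cursor,
                    'select ' || p_cname || ' from ' || p_tname || ' where rowid = :x',
                    dbms_sql.native );
    dbms_sql.bind_variable( l_cursor, ':x', p_rowid );
    dbms_sql.define_column_long( l_cursor, 1 );
    l_n := dbms_sql.execute( l_cursor );
    if ( dbms_sql.fetch_rows( l_cursor ) > 0 )
    then
        -- the 4th argument is the offset into the LONG, so p_offset = 4000
        -- returns bytes 4001..8000, p_offset = 8000 returns 8001..12000, and so on
        dbms_sql.column_value_long( l_cursor, 1, 4000, p_offset,
                                    l_long_val, l_long_len );
    end if;
    dbms_sql.close_cursor( l_cursor );
    return l_long_val;
end;
/

For example, select getlong_piece('T', 'X', rowid, 4000) from t would return the second 4000-byte slice.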
Reviews
Thanks a lot June 21, 2001 - 3pm Central time zone
Reviewer: Periasamy Subramaniyan from Bulgaria
We planned to do some alteration... very good timely help... Thanks a lot...
How does this work... September 20, 2002 - 5am Central time zone
Reviewer: Subhrajyoti Paul from Mumbai, India
Tom, why does this happen...

[email protected]> create table t1 (x int, y long);

Table created.

[email protected]> create table t2 (x int, y varchar2(4000));

Table created.

[email protected]> insert into t1 values(1,'abcde');

1 row created.

[email protected]> commit;

Commit complete.

[email protected]> insert into t2 select * from t1;
insert into t2 select * from t1
*
ERROR at line 1:
ORA-00997: illegal use of LONG datatype

[email protected]> begin
    for a in (select * from t1) loop
        insert into t2 values(a.x, a.y);
    end loop;
end;
/

PL/SQL procedure successfully completed.

[email protected]> select * from t2;

         X Y
---------- --------------------------------------------------------------------------------
         1 abcde

When I do an insert-select, the datatype conversion gives an error, but when I use the VALUES clause it works fine. Why does this happen?

Followup September 20, 2002 - 8am Central time zone: because the documentation says it will? LONGs cannot be used in an insert-select. Put more than 32k in the LONG and it'll fail miserably in PL/SQL as well. USE CLOBS, do not, repeat, do not, use a LONG.
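Following up on the advice to use CLOBs: a minimal migration sketch, assuming Oracle 9i or later and a hypothetical target table t1_clob. TO_LOB is permitted in exactly the place a plain LONG is not - the select list of an INSERT ... SELECT or CREATE TABLE AS SELECT:

create table t1_clob ( x int, y clob );

insert into t1_clob
select x, to_lob(y)        -- TO_LOB converts the LONG to a CLOB during the insert-select
  from t1;

After that, the column can be queried, indexed (Oracle Text) and inserted into without the LONG restrictions.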
But Tom, what happens in this case, why PL/SQL succeeds in this case... September 20, 2002 - 8am Central time zone
Reviewer: Subhrajyoti Paul
Tom, I have a table which was created with a long column and has around 1 M records in it... I want to modify the column type to long. In this process, I am saving the data into temp table with varchar2 column and modifying the main table. So, I have to do the conversion. I used the loop to do that. I just wanted to know why this failed in PURE SQL while it worked in PL/SQL loop.... Thanks
Correction in the last review September 20, 2002 - 8am Central time zone
Reviewer: Subhrajyoti Paul
Sorry, I want to modify the column type to VARCHAR2, not long as I wrongly wrote in the previous review...
Excellent work February 5, 2003 - 7am Central time zone
Reviewer: Chandra S. Reddy from India
Great stuff from very great Tom.
Is this against the coding standards? February 11, 2003 - 5am Central time zone
Reviewer: Sindhu from Alaska
Tom, Thanks for the code you provided. In one situation I have to send a value from LONG datatyped column to application through Stored Proc as a OUT parameter. SQL> desc t_comment Name Null? Type ----------------------------------------------------- ----- T_NUMBER NOT NULL CHAR(13) USER_COMMENT LONG In a stored Proc I implemented like. procedure sp_xx(in_cTNo IN NUMBER, l_var OUT VARCHAR2) IS select sf_pc_cmt_FNC(in_cTNo) into l_var from t_comment; end; the function "sf_pc_cmt_FNC" is exactly look like the code snippet you provided. But when I implemented this snippent, My client refused to accept this.Below are the client's comments. <<< his comments >>> " I think that there is a misunderstand information for " item #10:Use Oracle Packages and functions for generalized common routines frequently executed" in the Guideline for Oracle Database Application Development for PL/SQL Stored Procedures document. The dbms_sql package can be directory access by the store procedure if the code related to the reference tables and it is not heavy accessing by the applications. We have to discuss the correct the usage of the dbms_sql package. " <<<<>>>> The item#10 is below. . Use Oracle packages and functions for generalized common routines frequently executed since code is efficient. Also, use user-written packages and functions for application shared code. Before you write a user-written package or function, first verify it doesnt already exist with Oracle. One good reason to do this is because when Oracle upgrades to a new release they upgrade their own packages and functions for you. If you write them yourself, you may have more maintenance when converting to a new release. < item #10 over...> Is Oracle providing any package(DBMS_?) for this conversion. Please let me know the way to resolve this and why client is refused to accept this SP. Thanks in advance. Followup February 11, 2003 - 8am Central time zone: I don't understand this at all -- sorry? What exactly is their issue? My routine returns just the first 4000 characters of a long --regardless of how long the long is. IF your longs are all 32k or less your code should be: is l_tmp long; begin select long_col into l_tmp from t_comment where ...; return substr( l_tmp, 1, 4000 ); end; if they EXCEED 32k for just one row -- you have to use dbms_sql in plsql
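For reference, a complete version of the "32k or less" approach outlined in the followup, assuming the T_COMMENT table shown above and that no USER_COMMENT value exceeds 32k (this is a sketch for illustration, not the original sf_pc_cmt_FNC):

create or replace function get_user_comment( p_t_number in t_comment.t_number%type )
return varchar2
is
    l_tmp long;   -- a PL/SQL LONG variable holds up to 32k
begin
    select user_comment
      into l_tmp
      from t_comment
     where t_number = p_t_number;

    return substr( l_tmp, 1, 4000 );
end;
/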
Converting Long to Varchar2 with minimal redo/undo. June 24, 2003 - 12am Central time zone
Reviewer: A reader from Japan
Hi Tom, What method would you suggest for converting a long column to varchar2. Our table has about 20 million rows in it. We want to generate least number of archive logs for this. Sqlplus copy command and PL/SQL insert cannot be used as we cannot use APPEND hint with them(We tried creating a new table and then inserting into it,we had planed to rename the original table.We plan to create index's with nologging after the table is created.) The orginal table and the new table are as given below. CREATE TABLE AD_MESSAGES (MSG_NUM NUMBER(5) DEFAULT 0 NOT NULL ,MSG_CONTENT LONG NOT NULL ,MSG_TS DATE NOT NULL ,AD_DOC_DOC_NUM NUMBER(5) NOT NULL ,AD_DOC_ADC_CONTAINER_NUM NUMBER(12) NOT NULL ,AML_MSG_LEVEL VARCHAR2(10) NOT NULL ) TABLESPACE AD_MESSAGES / CREATE TABLE TEMP_AD_MESSAGES (MSG_NUM NUMBER(5) DEFAULT 0 NOT NULL ,MSG_CONTENT VARCHAR2(4000) NOT NULL ,MSG_TS DATE NOT NULL ,AD_DOC_DOC_NUM NUMBER(5) NOT NULL ,AD_DOC_ADC_CONTAINER_NUM NUMBER(12) NOT NULL ,AML_MSG_LEVEL VARCHAR2(10) NOT NULL ) TABLESPACE AD_MESSAGE / ALTER TABLE TEMP_AD_MESSAGES NOLOGGING / DECLARE CURSOR cur_msg IS SELECT msg_num ,msg_content ,msg_ts ,ad_doc_doc_num ,ad_doc_adc_container_num ,aml_msg_level FROM AD_MESSAGES; -- ln_msg_num TEMP_AD_MESSAGES.MSG_NUM%TYPE; lc_msg_content TEMP_AD_MESSAGES.MSG_CONTENT%TYPE; lc_msg_ts TEMP_AD_MESSAGES.MSG_TS%TYPE; ln_doc_num TEMP_AD_MESSAGES.AD_DOC_DOC_NUM%TYPE; ln_container_num TEMP_AD_MESSAGES.AD_DOC_ADC_CONTAINER_NUM%TYPE; lc_msg_lvl TEMP_AD_MESSAGES.AML_MSG_LEVEL%TYPE; -- ln_count NUMBER(20) := 0; ln_step NUMBER(10) := 0; BEGIN ln_step := 1; OPEN cur_msg; LOOP ln_step := 2; FETCH cur_msg INTO ln_msg_num,lc_msg_content,lc_msg_ts,ln_doc_num,ln_container_num,lc_msg_lvl; EXIT WHEN cur_msg%NOTFOUND; ln_step := 3; INSERT /*+ APPEND */ INTO TEMP_AD_MESSAGES (MSG_NUM ,MSG_CONTENT ,MSG_TS ,AD_DOC_DOC_NUM ,AD_DOC_ADC_CONTAINER_NUM ,AML_MSG_LEVEL) VALUES (ln_msg_num ,lc_msg_content ,lc_msg_ts ,ln_doc_num ,ln_container_num ,lc_msg_lvl); ln_step := 4; IF ln_count > 100000 THEN COMMIT; ln_count := 0; END IF; ln_step := 5; ln_count := ln_count + 1; END LOOP; ln_step := 6; COMMIT; EXCEPTION WHEN OTHERS THEN dbms_output.put_line('Error in script. '||SQLERRM ||'ln_step '||ln_step); END; This is generating heavy archive logs, as APPEND is ignored with 'values' clause of insert. What would be your suggestion for this ? Thanks, Followup June 24, 2003 - 7am Central time zone: there will be no way to do this without unloading the data and then using sqlldr direct path to reload it or writing an OCI program in C using the direct path apis. create table as select --> won't work, longs. insert /*+ append */ as select --> won't work, longs. copy -> won't work, conventional path. code you write -> won't work, conventional path. so, maybe you can unload to a flat file (see asktom.oracle.com/~tkyte for unloaders) and reload with sqlldr
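To make the unload-and-reload suggestion concrete, the reload half might look like the SQL*Loader control file below. This is a sketch only, assuming the data has already been unloaded to a pipe-delimited flat file named ad_messages.dat (the file name, delimiter and date mask are assumptions):

LOAD DATA
INFILE 'ad_messages.dat'
INTO TABLE temp_ad_messages
FIELDS TERMINATED BY '|' TRAILING NULLCOLS
( msg_num
, msg_content              CHAR(4000)
, msg_ts                   DATE "YYYY-MM-DD HH24:MI:SS"
, ad_doc_doc_num
, ad_doc_adc_container_num
, aml_msg_level
)

Running it with direct path (sqlldr userid=... control=ad_messages.ctl direct=true) against the NOLOGGING table is what keeps the redo generation down.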
use a function to return a varchar2 June 24, 2003 - 9am Central time zone
Reviewer: A reader from Japan
Hi Tom, Do you see any problem, if we create a function which takes a rowid parameter and then selects the long column into a varchar2 variable from the rowid and returns this varchar2 variable. After this we just use this function in the insert statement. For eg. insert /*+ APPEND */ into TEMP_AD_MESSAGES select MSG_NUM ,my_to_varchar_func(rowid),MSG_TS ,AD_DOC_DOC_NUM,AD_DOC_ADC_CONTAINER_NUM,AML_MSG_LEVEL from ad_messages; We know that our long field has less than 2000 bytes of length. In test case atleast its appearing to use direct mode insert. Thanks, Followup June 25, 2003 - 9am Central time zone: It would seem to me that in order to use append, you must be worried about gobs of UNDO. If you are worried about gobs of undo -- you must have TONS of data. If you have tons of data, calling a function once per row is going to be slow, I'd rather deal with the UNDO.
How to deal with long for this query June 24, 2003 - 6pm Central time zone
Reviewer: Yong from CA, USA
Tom, Search_condition is long type in Oracle data dictionary. How can I construct a query to get the following results without set a condition on a long type SQL> select constraint_name from user_constraints 2 where search_condition not like '%NOT NULL'; where search_condition not like '%NOT NULL' * ERROR at line 2: ORA-00932: inconsistent datatypes: expected NUMBER got LONG Followup June 25, 2003 - 11am Central time zone: ops$tkyte@ORA920LAP> create or replace function get_search_condition( p_cons_name in varchar2 ) return varchar2 2 authid current_user 3 is 4 l_search_condition user_constraints.search_condition%type; 5 begin 6 select search_condition into l_search_condition 7 from user_constraints 8 where constraint_name = p_cons_name; 9 10 return l_search_condition; 11 end; 12 / Function created. ops$tkyte@ORA920LAP> ops$tkyte@ORA920LAP> select constraint_name 2 from user_constraints 3 where get_search_condition(constraint_name) like '%NOT NULL%'; CONSTRAINT_NAME ------------------------------ SYS_C004792 SYS_C004794 SYS_C004181 ....
June 25, 2003 - 4pm Central time zone
Reviewer: A reader
Tom, Great! thanks,
I get error when I try to use your function January 15, 2004 - 3pm Central time zone
Reviewer: Sonali Kelkar from Waltham, MA
When I do this I get error create or replace function get_text( p_VIEW_NAME in varchar2 ) return varchar2 is l_text user_views.text%type; begin select text into l_text from user_views where view_name = p_VIEW_NAME; return l_text; end; / show errors select view_name from user_views where get_text(view_name) like '%Company_ID%'; ORA-06502: PL/SQL: numeric or value error: character string buffer too small ORA-06512: at "xyz.GET_TEXT", line 10 Followup January 16, 2004 - 12am Central time zone: well -- a major issue with this is that varchar2 is always limited to 4,000 characters in SQL -- returning a view > 4,000 characters is going to blow up here. You would have to: where do_a_like_on_view_text( view_name, '%Company_ID%' ) > 0 and code more like this: is return_val number := 0; begin for x in ( select text ... ) loop if ( x.text like p_inputs ) then return_val :=1; end if; end loop; return return_val; end;
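A completed version of the sketch in the followup, still subject to the 32k PL/SQL LONG limit mentioned elsewhere on this page (the function name comes from the followup; the rest is filled in for illustration):

create or replace function do_a_like_on_view_text
       ( p_view_name in varchar2, p_inputs in varchar2 )
return number
is
    return_val number := 0;
begin
    for x in ( select text
                 from user_views
                where view_name = p_view_name )
    loop
        if ( x.text like p_inputs )
        then
            return_val := 1;
        end if;
    end loop;
    return return_val;
end;
/

select view_name
  from user_views
 where do_a_like_on_view_text( view_name, '%Company_ID%' ) > 0;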
how to convert from long datatype to varchar April 21, 2004 - 4am Central time zone
Reviewer: Diwakar from Malaysia
If I want to use LIKE on a LONG datatype it gives an error, so how do I do it? Thanks

Followup April 21, 2004 - 8pm Central time zone: look up one review where I show you that you need to use a function.
CONVERSION FROM LONG TO VARCHAR2 April 21, 2004 - 11pm Central time zone
Reviewer: Diwakar from Malaysia
Hi, I have seen ur function but it also gives the following error:- select distinct trigger_name from user_triggers where FNC_getlong(trigger_name,'%BATCH%') > 0; where FNC_getlong(trigger_name,'%BATCH%') > 0 * ERROR at line 2: ORA-20001: ORA-06502: PL/SQL: numeric or value error ORA-06512: at "SEALINER_PH2B.FNC_GETLONG", line 20 ORA-06512: at line 1 As u can see i'm still getting error and the function is CREATE OR REPLACE FUNCTION Fnc_Getlong( p_tRG_name IN VARCHAR2,p_inputs VARCHAR2) RETURN NUMBER IS return_val NUMBER := 0; BEGIN FOR x IN ( SELECT trigger_body FROM user_triggers WHERE trigger_name =p_tRG_name ) LOOP IF ( x.trigger_body LIKE p_inputs ) THEN return_val :=1; END IF; END LOOP; RETURN return_val; EXCEPTION WHEN OTHERS THEN RAISE_APPLICATION_ERROR(-20001,SQLERRM); END; / can u pls help me in this Thanks in Advance Followup April 22, 2004 - 7am Central time zone: is your long >32k
conversion from long to varchar2 April 22, 2004 - 10pm Central time zone
Reviewer: Diwakar
Yes, it is greater than 32K.

Followup April 23, 2004 - 11am Central time zone: see above: ..... IF your longs are all 32k or less your code should be:

is
    l_tmp long;
begin
    select long_col into l_tmp from t_comment where ...;
    return substr( l_tmp, 1, 4000 );
end;

if they EXCEED 32k for just one row -- you have to use dbms_sql in plsql ................ however, if you need to search a long, you really ought to be looking at a text index:

create index t_idx on t(long_column) indextype is ctxsys.context;

and then

select * from t where contains( long_column, 'foobar' ) > 0;
Awesome April 15, 2005 - 8pm Central time zone
Reviewer: Paul Ramos from Santa Barbara, CA USA
Your team rules!!! This site is the cure for the common headache.
convert LONG to Varchar2 - THANKS! May 11, 2005 - 1am Central time zone
Reviewer: A. Bozrikov from Lemesos, Cyprus
I had to convert a dozen tables having LONG columns (for no reason, as all strings in the LONG columns were shorter than 2k!) to VARCHAR2 columns. Tom Kyte's suggestions to others were very helpful, thanks! I did not even have to ask anything.
search long July 15, 2005 - 5am Central time zone
Reviewer: A reader
Hi The procedure you used to find search_condition in user_constrains works fine. Is there a way to build a generic procedure/function to search long columns? Such as select * from user_views where search_long(text) like '%XXX%' select * from user_constraints where search_long(search_condition) like '%XXX%' THANKS Followup July 15, 2005 - 8am Central time zone: Here is an excerpt from the book I'm working on now that deals with this question:A question that arises frequently however is - what about the data dictionary in Oracle? It is littered with LONG columns and this makes using them (the dictionary columns) problematic. For example, to find all VIEWS that contain the text 'HELLO' is not possible: ops$tkyte@ORA10G> select * 2 from all_views 3 where text like '%HELLO%'; where text like '%HELLO%' * ERROR at line 3: ORA-00932: inconsistent datatypes: expected NUMBER got LONG This is not limited to just the ALL_VIEWS view, there are many views: ops$tkyte@ORA10G> select table_name, column_name 2 from dba_tab_columns 3 where data_type in ( 'LONG', 'LONG RAW' ) 4 and owner = 'SYS' 5 and table_name like 'DBA%'; TABLE_NAME COLUMN_NAME ------------------------------ ------------------------------ DBA_VIEWS TEXT DBA_TRIGGERS TRIGGER_BODY DBA_TAB_SUBPARTITIONS HIGH_VALUE DBA_TAB_PARTITIONS HIGH_VALUE DBA_TAB_COLUMNS DATA_DEFAULT DBA_TAB_COLS DATA_DEFAULT DBA_SUMMARY_AGGREGATES MEASURE DBA_SUMMARIES QUERY DBA_SUBPARTITION_TEMPLATES HIGH_BOUND DBA_SQLTUNE_PLANS OTHER DBA_SNAPSHOTS QUERY DBA_REGISTERED_SNAPSHOTS QUERY_TXT DBA_REGISTERED_MVIEWS QUERY_TXT DBA_OUTLINES SQL_TEXT DBA_NESTED_TABLE_COLS DATA_DEFAULT DBA_MVIEW_ANALYSIS QUERY DBA_MVIEW_AGGREGATES MEASURE DBA_MVIEWS QUERY DBA_IND_SUBPARTITIONS HIGH_VALUE DBA_IND_PARTITIONS HIGH_VALUE DBA_IND_EXPRESSIONS COLUMN_EXPRESSION DBA_CONSTRAINTS SEARCH_CONDITION DBA_CLUSTER_HASH_EXPRESSIONS HASH_EXPRESSION 23 rows selected. that are affected by this. So, what is the solution? If you want to make use of these columns in SQL - we'll need to convert them to a SQL friendly type. We can use a user defined function for doing so. This demonstrates how to accomplish a "long substr" function that will allow you to effectively convert any 4000 bytes of a LONG type into a varchar2, for use with SQL. When we are done, we'll be able to query: ops$tkyte@ORA10G> select * 2 from ( 3 select owner, view_name, 4 long_help.substr_of( 'select text 5 from dba_views 6 where owner = :owner 7 and view_name = :view_name', 8 1, 4000, 9 'owner', owner, 10 'view_name', view_name ) substr_of_view_text 11 from dba_views 12 where owner = user 13 ) 14 where upper(substr_of_view_text) like '%INNER%' 15 / Meaning, we converted the first 4000 bytes of the VIEW_TEXT column from LONG to a VARCHAR2 and can now use a predicate on it. Using the same technique, you would be able to implement your own instr, like and such for LONG types as well. In this book, I'll only be demonstrating how to get the substring of a LONG type. The package we will implement has the following specification: ops$tkyte@ORA10G> create or replace package long_help 2 authid current_user 3 as 4 function substr_of 5 ( p_query in varchar2, 6 p_from in number, 7 p_for in number, 8 p_name1 in varchar2 default NULL, 9 p_bind1 in varchar2 default NULL, 10 p_name2 in varchar2 default NULL, 11 p_bind2 in varchar2 default NULL, 12 p_name3 in varchar2 default NULL, 13 p_bind3 in varchar2 default NULL, 14 p_name4 in varchar2 default NULL, 15 p_bind4 in varchar2 default NULL ) 16 return varchar2; 17 end; 18 / Package created. 
Note that one line 2, we've specified AUTHID CURRENT_USER. This makes the package run as the invoker, with all roles and grants in place. This is important for two reasons. Firstly, we'd like the database security to no be subverted - this package will only return substrings of columns you (the invoker) is allowed to see. Secondly, we'd like to install this package once in the database and have its functionality available for all to use - using invokers rights allows us to do that. If we used the default security model of PLSQL, definer rights, the package would run with the privileges of the owner of the package. Meaning it would only be able to see data the owner of the package could see - which does not include the set of data the invoker is allowed to see. The concept behind the function SUBSTR_OF is to take a query that selects at most one row and one column - the LONG value we are interested in. SUBSTR_OF will parse that query if needed, bind any inputs to it and fetch the results programmatically, returning the necessary piece of the LONG value. The package body, the implementation, begins with two global variables. The G_CURSOR variable holds a persistent cursor open for the duration of our session. This is to avoid having to repeatedly open and close the cursor and to avoid parsing SQL more than we need to. The second global variable, G_QUERY, is used to remember the text of the last SQL query we've parsed in this package. As long as the query remains constant, we'll just parse it once. So, even if we query 5,000 rows in a query - as long as the SQL query we pass to this function doesn't change, we'll only have one parse call: ops$tkyte@ORA10G> create or replace package body long_help 2 as 3 4 g_cursor number := dbms_sql.open_cursor; 5 g_query varchar2(32765); 6 Next in this package is a private function, BIND_VARIABLE, that we'll use to bind inputs passed to us by the caller. We implemented this as a separate private procedure only to make life easier - we want to bind only when the input name is NOT NULL. Rather than perform that check 4 times in the code for each input parameter - we do it once in this procedure: 7 procedure bind_variable( p_name in varchar2, p_value in varchar2 ) 8 is 9 begin 10 if ( p_name is not null ) 11 then 12 dbms_sql.bind_variable( g_cursor, p_name, p_value ); 13 end if; 14 end; 15 Next is the actual implementation of SUBSTR_OF in the package body, it begins with function declaration from the package specification and the declaration for some local variables. L_BUFFER will be used to return the value and L_BUFFER_LEN will be used to hold the length returned by an Oracle supplied function: 16 17 function substr_of 18 ( p_query in varchar2, 19 p_from in number, 20 p_for in number, 21 p_name1 in varchar2 default NULL, 22 p_bind1 in varchar2 default NULL, 23 p_name2 in varchar2 default NULL, 24 p_bind2 in varchar2 default NULL, 25 p_name3 in varchar2 default NULL, 26 p_bind3 in varchar2 default NULL, 27 p_name4 in varchar2 default NULL, 28 p_bind4 in varchar2 default NULL ) 29 return varchar2 30 as 31 l_buffer varchar2(4000); 32 l_buffer_len number; 33 begin Now, the first thing our code does is a sanity check on the P_FROM and P_FOR inputs. 
P_FROM must be a number greater than or equal to 1 and P_FOR must be between 1 and 4000 - just like the built-in function SUBSTR: 34 if ( nvl(p_from,0) <= 0 ) 35 then 36 raise_application_error 37 (-20002, 'From must be >= 1 (positive numbers)' ); 38 end if; 39 if ( nvl(p_for,0) not between 1 and 4000 ) 40 then 41 raise_application_error 42 (-20003, 'For must be between 1 and 4000' ); 43 end if; 44 Next, we'll check to see if we are getting a new query that needs to be parsed. If the last query we parsed is the same as the current query - we can skip this step. It is very important to note that on line 47 we are verifying that the P_QUERY passed to us is just a SELECT - we will use this package only to execute SQL SELECT statements, this check validates that for us: 45 if ( p_query <> g_query or g_query is NULL ) 46 then 47 if ( upper(trim(nvl(p_query,'x'))) not like 'SELECT%') 48 then 49 raise_application_error 50 (-20001, 'This must be a select only' ); 51 end if; 52 dbms_sql.parse( g_cursor, p_query, dbms_sql.native ); 53 g_query := p_query; 54 end if; Now we are ready to bind the inputs to this query. Any non-NULL names that were passed to us will be "bound" to the query so when we execute it, it finds the right row: 55 bind_variable( p_name1, p_bind1 ); 56 bind_variable( p_name2, p_bind2 ); 57 bind_variable( p_name3, p_bind3 ); 58 bind_variable( p_name4, p_bind4 ); 59 And now we are ready to execute the query and fetch the row. Then using DBMS_SQL.COLUMN_VALUE_LONG, we extract the necessary substring of the long and return it: 60 dbms_sql.define_column_long(g_cursor, 1); 61 if (dbms_sql.execute_and_fetch(g_cursor)>0) 62 then 63 dbms_sql.column_value_long 64 (g_cursor, 1, p_for, p_from-1, 65 l_buffer, l_buffer_len ); 66 end if; 67 return l_buffer; 68 end substr_of; 69 70 end; 71 / Package body created. And that is it, you should be able to use that package against any legacy LONG column in your database, allowing you to perform many "where clause" operations that were not possible before, for example, to find all partitions in your schema such that the HIGH_VALUE has the year 2003 in it: ops$tkyte@ORA10G> select * 2 from ( 3 select table_owner, table_name, partition_name, 4 long_help.substr_of 5 ( 'select high_value 6 from all_tab_partitions 7 where table_owner = :o 8 and table_name = :n 9 and partition_name = :p', 10 1, 4000, 11 'o', table_owner, 12 'n', table_name, 13 'p', partition_name ) high_value 14 from all_tab_partitions 15 where table_name = 'T' 16 and table_owner = user 17 ) 18 where high_value like '%2003%' 19 / TABLE_OWN TABLE PARTIT HIGH_VALUE --------- ----- ------ ------------------------------ OPS$TKYTE T PART1 TO_DATE(' 2003-03-13 00:00:00' , 'SYYYY-MM-DD HH24:MI:SS', 'N LS_CALENDAR=GREGORIAN') OPS$TKYTE T PART2 TO_DATE(' 2003-03-14 00:00:00' , 'SYYYY-MM-DD HH24:MI:SS', 'N LS_CALENDAR=GREGORIAN') Using this same technique - that of processing the result of a query that returns a single row with a single LONG column in a function - you can implement your own INSTR, LIKE and so on as needed. This implementation works well on the LONG type but will not work on LONG RAW types. LONG RAWs are not piecewise accessible (there is no COLUMN_VALUE_LONG_RAW function in DBMS_SQL). Fortunately, this is not too serious of a restriction since LONG RAWs are not used in the dictionary and the need to "substring" so you can search on it is rare. 
If you do have a need to do so however, you will not be using PLSQL unless the LONG RAW is 32k or less, there is simply no method for dealing with LONG RAWS over 32k in PLSQL itself. Java, C, C++, Visual Basic or some other language would have to be used. Another approach would be to temporarily convert the LONG or LONG RAW into a CLOB or BLOB using the TO_LOB built in function and a global temporary table. Your PLSQL procedure could: Insert into global_temp_table ( blob_column ) select to_lob(long_raw_column) from t where This would work well in an application that occasionally needed to work with a single LONG RAW value - you would not want to be continuously doing that however due to the amount of work involved. If you find yourself needing to resort to this technique frequently, you would definitely convert the LONG RAW to a BLOB once and be done with it.
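A sketch of the temporary-table approach described above, with made-up table and column names; TO_LOB turns the LONG RAW into a BLOB on the way into the global temporary table, after which DBMS_LOB works normally:

create global temporary table gtt_lob
( id          number primary key,
  blob_column blob )
on commit delete rows;

insert into gtt_lob ( id, blob_column )
select 1, to_lob( long_raw_column )
  from t
 where id = 42;              -- whichever single row is needed

declare
    l_blob blob;
begin
    select blob_column into l_blob from gtt_lob where id = 1;
    dbms_output.put_line( 'length = ' || dbms_lob.getlength( l_blob ) );
end;
/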
Very Useful August 29, 2005 - 10pm Central time zone
Reviewer: A reader
Updating a value in a long column November 18, 2005 - 1pm Central time zone
Reviewer: Ajums T T from Mumbai, India
Hi Tom, We have a table with a LONG column. In this LONG column there are several instances of the word "Bombay". We have to replace all instances of "Bombay" with "Mumbai". There are over 15000 such records and it's painful to do this one at a time through the front-end. Can we write a PL/SQL block to do this in one go? Followup November 18, 2005 - 3pm Central time zone: how long are the longs?
November 18, 2005 - 1pm Central time zone
Reviewer: Ajums TT
We are using Oracle 8.1.7. I forgot to mention that earlier. Regards, Ajums TT
November 19, 2005 - 12pm Central time zone
Reviewer: Ajums TT from Mumbai, India
The longs are about 5 pages of text in some of the rows. In others, where it's less than 4000 bytes, I wrote a piece of code in VB and that worked for them. For this 5 pages of text, there are 3 rows that still need to be updated.

Followup November 19, 2005 - 1pm Central time zone: If "5 pages" is less than 32k - you can use plsql.

begin
    for x in ( select * from t ) loop
        x.long_col := replace( x.long_col, .... );
        update t set long_col = x.long_col where ....;
    end loop;
end;

I've no idea what "5 pages" is though.
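Filled in as a runnable sketch (assuming a hypothetical table t with a LONG column long_col and values under 32k, both before and after the replace), the loop would look something like:

declare
    l_new long;
begin
    for x in ( select rowid rid, long_col from t )
    loop
        -- the replace happens in PL/SQL, where the LONG is just a big varchar2
        l_new := replace( x.long_col, 'Bombay', 'Mumbai' );
        update t
           set long_col = l_new
         where rowid = x.rid;
    end loop;
    commit;
end;
/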
What is 5 pages. November 19, 2005 - 1pm Central time zone
Reviewer: Ajums TT from Mumbai, India
5 pages is a printout on A4 size paper. I am certain that one of the rows has crossed 32 K in size. Followup November 19, 2005 - 2pm Central time zone: that still doesn't tell me how big 5 pages is :) but if they are over 32k in size, you need something beyond SQL and PLSQL to deal with them.
worked for 4 rows November 19, 2005 - 4pm Central time zone
Reviewer: Ajums TT from Mumbai, India
Hi Tom, Your technique has updated all but 1 row in my table. I am certain its more than 32 K of data in this particular row. If I need something beyond Pl /SQL or SQL to fix this, where should I be looking at? I have been having a lot of sleepless nights for some time now. Regards, Ajums TT Followup November 19, 2005 - 8pm Central time zone: look to the application that puts it into the database in the first place??? what is it written in.
Explicitly checking for SELECT instead of pragma restrict_references(WNDS) November 20, 2005 - 4am Central time zone
Reviewer: Jagjeet Singh from Delhi, India
Hi, in the long_help package you are explicitly checking whether the text is a SELECT or not. Any specific reason for not using pragma restrict_references(....,WNDS)? Js Followup November 20, 2005 - 8am Central time zone: because you haven't needed to use restrict references for many releases, so I don't bother.
Varchar2 to Nvarchar2 migration November 22, 2005 - 12am Central time zone
Reviewer: Thirumaran from India
Hi Tom, Is it possible to move data from a VARCHAR2 datatype on an Oracle 9i Solaris server to an NVARCHAR2 datatype on an Oracle 10g R2 Windows 2003 server? Oracle 9i is using the default Oracle character set, whereas the proposed Oracle 10g R2 database character set will be AL32UTF8 and the national character set will be UTF16 when the database is created. Thanks in adv, Thirumaran. Followup November 22, 2005 - 8am Central time zone: yes it is.
Char to Nvarchar conversion November 22, 2005 - 2am Central time zone
Reviewer: Banu from India
Dear Tom, 1) Can i move my Table columns data from Char to NVarchar datatype. Examples are available only form CHAR to NCHAR . 2) What is the difference between unicode units VS Bytes. i am not clear with the text from this site: http://www.cs.umb.edu/cs634/ora9idocs/server.920/a96529/ch2.htm#104327 Oracle9i Database Globalization Support Guide Release 2 (9.2) Part Number A96529-01 "When the NCHAR character set is AL16UTF16, ename can hold up to 10 Unicode code units. When the NCHAR character set is AL16UTF16, ename can hold up to 20 bytes." please illustrate with examples. 3)i have a DB with western alphabets(9i) when i move data to a DB which has different Database character set(AL32UTF8) & National character set(UTF-16) what are the impacts ? please guide me on this. regards Banu Followup November 22, 2005 - 8am Central time zone: 1) yes. 2) basically it is saying that unicode takes one OR MORE bytes to store a single character, it is a multi-byte encoding scheme. when you say "varchar2(20)", you get 20 bytes - it might be able to hold 20 characters, then again - maybe not. 3) you'll be going from single byte per character to multi-byte. You might find that 40 characters no longer fits in the varchar2(40) which is 40 bytes as 40 characters might need 80, 120, 160 or even more bytes.
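A small illustration of point 3 - in an AL32UTF8 database a plain VARCHAR2(n) is n bytes by default, so declaring the length in characters avoids the "no longer fits" surprise. A sketch assuming the default NLS_LENGTH_SEMANTICS of BYTE:

-- 40 bytes: 40 single-byte characters fit, 40 multi-byte characters may not
create table t_bytes ( x varchar2(40) );

-- 40 characters, regardless of how many bytes each character needs
create table t_chars ( x varchar2(40 char) );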
converting LONG to NUMBER datatype November 25, 2005 - 6am Central time zone
Reviewer: Suvendu from Bangalore, India
Hi Tom, Here my problem with high_value column in user_tab_partitions not with the query. If my query is written wrong way, could you please correct me here. The scenarion is like: When I'm getting low range value partition to an existing partition table, I wants find out where this new partition going to fit in between existing two partitions and after getting it, I need to : Split the highest bound partition with new partition key value. To find out same here is my query, but BETWEEN ..AND .. clause going fail on LONG column high_value, I tried with utl_raw package relating this thread along with CAST( high_value AS NUMBER) option too, but not getting solved the problem. Could you, please provide any good option to do same, if any and to solve the problem here too? SQL> select lower.partition_name, lower.high_value, higher.partition_name, higher.high_value from (select partition_name, high_value from user_tab_partitions where table_name='FACT_TAB') lower, (select partition_name, high_value from user_tab_partitions where table_name='FACT_TAB') higher where lower.partition_name=higher.partition_name and 1117324802 between lower.high_value and higher.high_value; ORA-00997: illegal use of LONG datatype It's in Oracle 9.2 on HP Unix. Thanking you a lot for your kind consideartion to my question. Regards, Suvendu Followup November 25, 2005 - 10am Central time zone: http://asktom.oracle.com/pls/asktom/f?p=100:11:::::P11_QUESTION_ID:839298816582#44365156010493
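The linked answer aside, one hedged workaround is the same LONG-to-VARCHAR2 trick used throughout this page: copy HIGH_VALUE through a small helper function and compare the result. This assumes the partitions have plain numeric bounds (no MAXVALUE partition, no date bounds); the function below is illustrative, not from the followup:

create or replace function get_high_value
       ( p_table_name in varchar2, p_partition_name in varchar2 )
return varchar2
is
    l_high long;
begin
    select high_value
      into l_high
      from user_tab_partitions
     where table_name     = p_table_name
       and partition_name = p_partition_name;

    return substr( l_high, 1, 4000 );
end;
/

select partition_name
  from user_tab_partitions
 where table_name = 'FACT_TAB'
   and to_number( get_high_value( table_name, partition_name ) ) > 1117324802;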
10g DATE datatypes vs. 8i DATE datatypes March 15, 2006 - 6pm Central time zone
Reviewer: Sinan Topuz from NYC, USA
Tom, The following package was working well on 8i. I migrated it to 10g and I got an error message "ORA-00932:inconsistent datatypes: expected - got -". Could you explain me what this means please? Thanks in advance. Sinan VIEW ---- Name Type --------------------- ------------- SH_HEADER_ID VARCHAR2(30) SH_DATE DATE SH_SERIAL VARCHAR2(2) SH_NUMBER VARCHAR2(30) SH_STATUS VARCHAR2(2) SH_TYPE VARCHAR2(3) SH_CUST_ID VARCHAR2(30) SH_CUST_PHONE VARCHAR2(30) SH_PO_NUMBER VARCHAR2(30) SH_DEPT_CODE VARCHAR2(10) SH_STORE_CODE VARCHAR2(10) SH_TERMS_CODE VARCHAR2(5) SH_SHIPMENT_TYPE VARCHAR2(30) SH_SHIPTO_ATTN VARCHAR2(100) SH_SHIPTO VARCHAR2(100) SH_SHIPTO_ADDRESS1 VARCHAR2(100) SH_SHIPTO_ADDRESS2 VARCHAR2(100) SH_SHIPTO_CITY VARCHAR2(100) SH_SHIPTO_STATE VARCHAR2(5) SH_SHIPTO_ZIP VARCHAR2(15) SH_SHIPTO_COUNTRY VARCHAR2(30) SH_SHIPTO_PHONE VARCHAR2(30) SH_SHIPTO_FAX VARCHAR2(30) SH_BILLTO VARCHAR2(100) SH_BILLTO_ADDRESS1 VARCHAR2(100) SH_BILLTO_ADDRESS2 VARCHAR2(100) SH_BILLTO_CITY VARCHAR2(100) SH_BILLTO_STATE VARCHAR2(5) SH_BILLTO_ZIP VARCHAR2(15) SH_BILLTO_COUNTRY VARCHAR2(30) SH_BILLTO_PHONE VARCHAR2(30) SH_BILLTO_FAX VARCHAR2(30) SH_SALESPERSON VARCHAR2(30) SH_STARTDATE DATE SH_CANCELDATE DATE SH_SPECIALINSTRUCTION VARCHAR2(254) SH_ENTEREDBY VARCHAR2(30) SH_ENTERED_DATE DATE SH_UPDATEDBY VARCHAR2(30) SH_UPDATED_DATE DATE SH_BFVAT_AMOUNT NUMBER SH_VAT_AMOUNT NUMBER SH_NET_AMOUNT NUMBER SH_CCARDNO VARCHAR2(30) SH_CCARDEXP DATE SH_CCARDNAME VARCHAR2(60) CR_CUST_CODE VARCHAR2(30) CR_CUST_NAME VARCHAR2(100) CR_EMAIL VARCHAR2(254) CR_PHONE VARCHAR2(30) CR_FAX VARCHAR2(30) SP_NAME VARCHAR2(100) SP_ADDRESS1 VARCHAR2(100) SP_ADDRESS2 VARCHAR2(100) SP_CITY VARCHAR2(100) SP_ZIP VARCHAR2(15) SP_COUNTRY VARCHAR2(30) SP_PHONE1 VARCHAR2(30) SP_PHONE2 VARCHAR2(30) SP_PAGER VARCHAR2(30) SP_FAX VARCHAR2(30) SP_EMAIL VARCHAR2(254) The error occurs when the cursor is opened because if I comment the cursor loop, I don't get the error message. Could this be something related to DATE formats??? 
CREATE OR REPLACE PACKAGE sl_view AS PROCEDURE orderlist( wsid IN VARCHAR2 ,pfind IN VARCHAR2 DEFAULT '' ,pbeg_date IN VARCHAR2 DEFAULT to_char(sysdate-15, 'MM/DD/YYYY') ,pend_date IN VARCHAR2 DEFAULT to_char(sysdate, 'MM/DD/YYYY') ,pslsperson IN VARCHAR2 DEFAULT '' ,pstatus IN VARCHAR2 DEFAULT '' ,pfilter IN VARCHAR2 DEFAULT '' ,pgn_beg IN NUMBER DEFAULT 1 ); END sl_view; / CREATE OR REPLACE PACKAGE BODY sl_view AS /* LIST ORDERS */ PROCEDURE orderlist( wsid IN VARCHAR2 ,pfind IN VARCHAR2 DEFAULT '' ,pbeg_date IN VARCHAR2 DEFAULT to_char(sysdate-15, 'MM/DD/YYYY') ,pend_date IN VARCHAR2 DEFAULT to_char(sysdate, 'MM/DD/YYYY') ,pslsperson IN VARCHAR2 DEFAULT '' ,pstatus IN VARCHAR2 DEFAULT '' ,pfilter IN VARCHAR2 DEFAULT '' ,pgn_beg IN NUMBER DEFAULT 1 ) IS CURSOR csid IS SELECT SD_SID, SD_USER_CODE, SD_SERVER, SD_LOGIN_DATE, SD_EXP_DATE, SD_IP FROM SL_SESSION WHERE SD_SID = wsid ; RSID CSID%ROWTYPE; max_line_page integer; SQ varchar2(5000); SQW varchar2(5000); max_page_num integer; P_FIRST number; P_LAST number; F_PAGE integer; Z_MAXNUM VARCHAR2(25); max_var number; page_write_count integer; TYPE typ1 IS REF CURSOR; ref_cr typ1; v_SH_HEADER_ID SL_SLSORDERS.SH_HEADER_ID %TYPE; v_SH_DATE SL_SLSORDERS.SH_DATE %TYPE; v_SH_SERIAL SL_SLSORDERS.SH_SERIAL %TYPE; v_SH_NUMBER SL_SLSORDERS.SH_NUMBER %TYPE; v_SH_STATUS SL_SLSORDERS.SH_STATUS %TYPE; v_SH_TYPE SL_SLSORDERS.SH_TYPE %TYPE; v_SH_CUST_ID SL_SLSORDERS.SH_CUST_ID %TYPE; v_SH_PO_NUMBER SL_SLSORDERS.SH_PO_NUMBER %TYPE; v_SH_DEPT_CODE SL_SLSORDERS.SH_DEPT_CODE %TYPE; v_SH_STORE_CODE SL_SLSORDERS.SH_STORE_CODE %TYPE; v_SH_TERMS_CODE SL_SLSORDERS.SH_TERMS_CODE %TYPE; v_SH_SHIPMENT_TYPE SL_SLSORDERS.SH_SHIPMENT_TYPE %TYPE; v_SH_SALESPERSON SL_SLSORDERS.SH_SALESPERSON %TYPE; v_SH_STARTDATE SL_SLSORDERS.SH_STARTDATE %TYPE; v_SH_CANCELDATE SL_SLSORDERS.SH_CANCELDATE %TYPE; v_SH_ENTEREDBY SL_SLSORDERS.SH_ENTEREDBY %TYPE; v_SH_ENTERED_DATE SL_SLSORDERS.SH_ENTERED_DATE %TYPE; v_SH_UPDATEDBY SL_SLSORDERS.SH_UPDATEDBY %TYPE; v_SH_UPDATED_DATE SL_SLSORDERS.SH_UPDATED_DATE %TYPE; v_SH_BFVAT_AMOUNT SL_SLSORDERS.SH_BFVAT_AMOUNT %TYPE; v_SH_VAT_AMOUNT SL_SLSORDERS.SH_VAT_AMOUNT %TYPE; v_SH_NET_AMOUNT SL_SLSORDERS.SH_NET_AMOUNT %TYPE; v_CR_CUST_CODE SL_SLSORDERS.CR_CUST_CODE %TYPE; v_CR_CUST_NAME SL_SLSORDERS.CR_CUST_NAME %TYPE; v_SP_NAME SL_SLSORDERS.SP_NAME %TYPE; beg_date DATE; end_date DATE; ERR1 EXCEPTION; ERR_MSG VARCHAR2(250); pSID SL_SESSION.SD_SID%TYPE; BEGIN pSID := wsid; sl_security_check.checkit(pSID); max_line_page := TO_NUMBER(get_lookupvalue('SYSTEM', 'MAX_PAGE_LINE')); OPEN CSID; FETCH CSID INTO RSID; IF CSID%NOTFOUND OR RSID.SD_SID IS NULL THEN CLOSE CSID; ERR_MSG := 'Please close the browser window and sign in again!'; RAISE ERR1; END IF; CLOSE CSID; htp.p('
Followup March 16, 2006 - 7am Central time zone: tell you what - get the example down to say - 3 or 4 columns and the procedure down to say 10 or so lines of code. eg: start the debugging process, isolate the problem, remove all non-relevant bits. then and only then can we look at it. You've got 10 or more pages of code here, I'm not a compiler, I cannot even compile and run your code as provided to reproduce the issue. If you want someone to look at it, make it reproduce in a SMALL SMALL SMALL test case that is 100% complete.
Problem has been fixed March 27, 2006 - 12pm Central time zone
Reviewer: Sinan Topuz from NYC, USA
Tom, First of all, I apologize for just copying and pasting the code. I fixed the problem. The problem was in the dynamically created SQL statement. In 8i, it does not matter whether you declare a variable to select ROWNUM into, but in 10g I had to declare one more variable to hold ROWNUM. Thanks again for your time.
long formatting April 10, 2006 - 5pm Central time zone
Reviewer: Sah
Hi Tom, We have RTF files stored in a long column in a table. I want to convert RTF into text and/or html. I want to write a function that converts RTF to regular text. (HTML is also ok, so the web reports will have the formats preserved). We want this to be 8i compatible as well if possible. Currently using 8.1.7.2.1 but will be migrating to 9.2.0.6 shortly. After reading through your notes, I wrote a procedure that converts long to long raw to blob to clob (filters the text from blob using ctx_doc.ifilter, ctx_doc.filter). This conversion is done because we can only convert long->clob, long raw->blob. I want the data in blob because ctx_doc.ifilter will accept blob for filtering the text from it. Firstly, this gives following error when converting long to long raw.Secondly, I want this to work in 8i also; I understand ctx_doc.filter is not present in 8i. Please advise. Thanks. ______________________________________________________ create global temporary table lob_temp ( id int primary key, c_lob clob, b_lob blob ) ; create sequence lob_temp_seq; SQL9i> CREATE OR REPLACE procedure RTFtoHTML( 2 p_seqno in number, 3 p_tname in varchar2, 4 p_cname in varchar2, 5 p_sname in varchar2, 6 p_clob out clob) 7 AS 8 l_blob blob; 9 l_clob clob := empty_clob() ; 10 sqlstr varchar2(32000); 11 l_longraw long raw; 12 l_lomg long; 13 l_id int; 14 begin 15 -- convert long to long raw 16 sqlstr := 'select utl_raw.cast_to_raw(' || p_cname || ') 17 from ' || p_tname || ' where '||p_sname ||'= :seqno' ; 18 execute immediate sqlstr into l_longraw using IN p_seqno; 19 20 -- convert long raw to blob 21 select lob_temp_seq.nextval into l_id from dual; 22 sqlstr := 'insert into lob_temp (id,b_lob) select :id, to_lob(' || l_longraw || ') 23 from dual' ; 24 execute immediate sqlstr using IN l_id; 25 26 -- filter clob from blob 27 select b_lob into l_blob from lob_temp where id = l_id ; 28 ctx_doc.ifilter( l_blob, l_clob ); 29 update lob_temp set c_lob = l_clob where id = l_id ; 30 end; 31 / Procedure created. bps_proj1@BPSREP1> declare 2 l_clob clob:=empty_clob() ; 3 begin 4 RTFtoHTML(370, 5 'MR_NOTES', 6 'MRNO_NOTE', 7 'MRNO_SEQNO', 8 l_clob); 9 end; 10 / declare * ERROR at line 1: ORA-00997: illegal use of LONG datatype ORA-06512: at "BPS_PROJ1.RTFTOHTML", line 18 ORA-06512: at line 4 Followup April 11, 2006 - 2pm Central time zone: that is not going to work for anything over 4000 bytes using sql. you are saying you have a RTF file stored in long?
long formatting April 13, 2006 - 11pm Central time zone
Reviewer: Sah
Hi Tom, that is not going to work for anything over 4000 bytes using sql. -- this does not work for 100 bytes also. Please see the email I've sent containing information. bps_proj1@BPSREP1> declare 2 l_clob clob:=empty_clob() ; 3 begin 4 RTFtoHTML( 5 390.99999, 6 'MR_NOTES', 7 'MRNO_NOTE', 8 'MRNO_SEQNO', 9 l_clob); 10 end; 11 / declare * ERROR at line 1: ORA-00997: illegal use of LONG datatype ORA-06512: at "BPS_PROJ1.RTFTOHTML", line 18 ORA-06512: at line 4 you are saying you have a RTF file stored in long? -- yes, the user types in text in the application and then the application control (app is in VB) stored this data in a long field in the database as RTF. Please see screen shots in my email. After saving in the application, teh user can view the saved data properly. Thanks. Followup April 14, 2006 - 12pm Central time zone: I don't open word attachments in email from people I don't know (no MS attachments really). and a screen shot is totally not necessary to show anything. no idea what is in your table or how you arrived at 100 bytes?
long formatting April 14, 2006 - 3pm Central time zone
Reviewer: Sah
Hi Tom, The field mrno_notes in defined as long in mr_notes table. Following is what the user types in from application (written in VB): Hello!! First item Second item The user saves the application and then following is what is stored in database (I got it from Toad). It seems the application control is converting "what the user types in " and storing it as "RTF" in the database in the long field: {/rtf1/ansi/ansicpg1252/deff0/deflang1033{/fonttbl{/f0/fswiss/fprq2/fcharset0 Times New Roman;}{/f1/fswiss/fprq2 Times New Roman;}{/f2/fnil/fcharset2 Symbol;}} /viewkind4/uc1/pard/b/protect/f0/fs20 Hello!!/f1/par /protect0/pard/protect{/pntext/f2/'B7/tab}{/*/pn/pnlvlblt/pnf2/pnindent0{/pntxtb/'B7}}/fi-180/li180/ f0 First item/f1/par /f0{/pntext/f2/'B7/tab}Second item/f1/par } The user can close the application, open it again and view the same thing again. It shows up clearly in the application as following i.e. without any apparent RTF tags. Hello!! First item Second item I want the same behaviour for Web Reports. Presently, when we display the field mr_notes in web reports, then it is displayed as {/rtf1/ansi/ansicpg1252/deff0/deflang1033{/fonttbl{/f0/fswiss/fprq2/....... I want to filter text/html from this stored rtf. Also it'll be good if I can do it in both 8i and 9i please. no idea what is in your table or how you arrived at 100 bytes? -- I estimated the following "what the user types in" to be 100 bytes. Hello!! First item Second item --following is output from sqlplus bps_proj1@BPSREP1> desc mr_notes; Name Null? Type ----------------------------------------------------------------------- -------- ----------------------- MRNO_SEQNO NOT NULL NUMBER MRNO_VTYP_TYPE NOT NULL VARCHAR2(10) MRNO_DATASEQNO NOT NULL NUMBER MRNO_DATE NOT NULL DATE MRNO_ORDER NUMBER MRNO_KEYWORD VARCHAR2(20) MRNO_BUSR_ID VARCHAR2(30) MRNO_NOTE LONG bps_proj1@BPSREP1> select * from mr_notes where mrno_seqno=390.99999; MRNO_SEQNO MRNO_VTYP_ MRNO_DATASEQNO MRNO_DATE MRNO_ORDER MRNO_KEYWORD MRNO_BUSR_ID ---------------- ---------- ---------------- --------- ---------------- -------------------- ------------ MRNO_NOTE -------------------------------------------------------------------------------- 390.99999 MRMTLHDR 25.99999 13-APR-06 ? ? HSIMMI {/rtf1/ansi/ansicpg1252/deff0/deflang1033{/fonttbl{/f0/fswiss/fprq2/fcharset0 Ti mes New Roman;}{/f1/fswiss/fprq2 Times New Roman;}{/f2/fnil/fcharset2 Symbol;}} /viewkind4/uc1/pard/b/protect/f0/fs20 Hello!!/f1/par /protect0/pard/protect{/pntext/f2/'B7/tab}{/*/pn/pnlvlblt/pnf2/pnindent0{/pntxtb /'B7}}/fi-180/li180/f0 First item/f1/par /f0{/pntext/f2/'B7/tab}Second item/f1/par } Thanks. Followup April 14, 2006 - 4pm Central time zone: right, the user typed in 100 bytes. but what did you STORE in the database - that would appear "more" - use dbms_lob.getlength and see what size it is, that is likely why you cannot use sql on it as stated.
long formatting April 16, 2006 - 8pm Central time zone
Reviewer: Shha
Hi Tom, bps_proj1@BPSREP1> select dbms_lob.getlength(MRNO_NOTE) from mr_notes where mrno_seqno=390.99999; select dbms_lob.getlength(MRNO_NOTE) from mr_notes where mrno_seqno=390.99999 * ERROR at line 1: ORA-00997: illegal use of LONG datatype ____________________________________________________________ I'm not able to see the long size. Moreover, my goal is : I want to display this (without RTF tags) for Web Reports. Presently, when we display the field mr_notes in web reports, then it is displayed as {/rtf1/ansi/ansicpg1252/deff0/deflang1033{/fonttbl{/f0/fswiss/fprq2/....... I want to filter text/html from this stored rtf. Also it'll be good if I can do it in both 8i and 9i please. Thanks. Followup April 16, 2006 - 8pm Central time zone: that would be because - a long is NOT a lob and dbms_lob works on, well, lobs only. http://asktom.oracle.com/pls/asktom/f?p=100:11:::::P11_QUESTION_ID:510622111991#29261383968124 seems we've been over this however - about how to convert rtf into text (and how it works in certain versions but not others?) else you'll be getting the RTF specification from somewhere and writing your own filter...
long formatting April 24, 2006 - 8pm Central time zone
Reviewer: Shha
Tom, I tried the procedure with 32k bytes of the long. But it still gives error: SQL> CREATE OR REPLACE procedure RTFtoHTML( 2 p_seqno in number, 3 p_tname in varchar2, 4 p_cname in varchar2, 5 p_sname in varchar2, 6 p_clob out clob) 7 AS 8 l_blob blob; 9 l_clob clob := empty_clob() ; 10 sqlstr varchar2(32000); 11 l_sqlstr varchar2(32000); 12 l_longraw long raw; 13 l_lomg long; 14 l_id int; 15 l_long32 long; 16 begin 17 18 -- convert long to long raw 19 l_sqlstr := 'select '||p_cname||' FROM '|| p_tname||' where '||p_sname||'='||p_seqno; 20 l_long32:=longsubstr(l_sqlstr ,1,32000); 21 sqlstr := 'select utl_raw.cast_to_raw(' || l_long32 || ') from dual' ; 22 execute immediate sqlstr into l_longraw ; 23 24 -- convert long raw to blob 25 select lob_temp_seq.nextval into l_id from dual; 26 sqlstr := 'insert into lob_temp (id,b_lob) select :id, to_lob(' || l_longraw || ') 27 from dual' ; 28 execute immediate sqlstr using IN l_id; 29 30 select lob_temp_seq.nextval into l_id from dual; 31 sqlstr := 'insert into lob_temp (id,b_lob) select :id, :longraw from dual' ; 32 execute immediate sqlstr using IN l_id, l_longraw; 33 34 -- filter clob from blob 35 select b_lob into l_blob from lob_temp where id = l_id ; 36 ctx_doc.ifilter( l_blob, l_clob ); 37 update lob_temp set c_lob = l_clob where id = l_id ; 38 end; 39 / Procedure created. SQL> declare 2 l_clob clob:=empty_clob() ; 3 begin 4 RTFtoHTML( 5 390.99999, 6 'MR_NOTES', 7 'MRNO_NOTE', 8 'MRNO_SEQNO', 9 l_clob); 10 end; 11 / declare * ERROR at line 1: ORA-00911: invalid character ORA-06512: at "BPS_PROJ1.RTFTOHTML", line 22 ORA-06512: at line 4 The problem is it is not converting long to longraw. I need to do that so I can then convert long raw to blob; and filter text from the rtf stored in blob. I searched the site, I didn't find way to filter text from rtf stored in long. Please advise if my procedure is okay, Thanks. Followup April 25, 2006 - 12am Central time zone: 18 -- convert long to long raw 19 l_sqlstr := 'select '||p_cname||' FROM '|| p_tname||' where '||p_sname||'='||p_seqno; 20 l_long32:=longsubstr(l_sqlstr ,1,32000); 21 sqlstr := 'select utl_raw.cast_to_raw(' || l_long32 || ') from dual' ; 22 execute immediate sqlstr into l_longraw ; 23 that code does not make sense. I don't get what you are trying to do at all on like 21 and 22 there. If you are trying to take 32000 bytes of long and consider it raw, you would replace those lines with l_longraw := utl_raw.cast_to_raw( l_long32 );
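Putting the followup's correction together with the earlier pieces, the whole flow might look like the sketch below. It is heavily hedged: it assumes the LONG value is 32k or less, that Oracle Text's CTX_DOC.IFILTER is available (9.2 and up), and it reuses the MR_NOTES columns from the example; it is not the poster's procedure:

create or replace procedure rtf_to_text( p_seqno in number, p_clob in out nocopy clob )
as
    l_long long;
    l_raw  raw(32767);
    l_blob blob;
begin
    -- fetch the LONG with a bind variable; only works while the value is <= 32k
    execute immediate
        'select mrno_note from mr_notes where mrno_seqno = :seqno'
        into l_long
        using p_seqno;

    -- cast to RAW in PL/SQL (not in SQL), per the followup
    l_raw := utl_raw.cast_to_raw( l_long );

    -- stage the RAW in a temporary BLOB so the Text filter can read it
    dbms_lob.createtemporary( l_blob, true );
    dbms_lob.writeappend( l_blob, utl_raw.length( l_raw ), l_raw );

    dbms_lob.createtemporary( p_clob, true );
    ctx_doc.ifilter( l_blob, p_clob );   -- the plain-text rendering of the RTF ends up in p_clob

    dbms_lob.freetemporary( l_blob );
end;
/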
Absolutely great tip November 28, 2006 - 3pm Central time zone
Reviewer: Prashant from NJ
Tom, The getlong function literally saved me as I was trying to meet a deadline. I was having problems due to a LONG column in a table and had no clue how to convert it.
Long to Varchar2 conversion - search the content in long column December 5, 2006 - 3pm Central time zone
Reviewer: Gordon from Toronto, Canada
Thanks Tom! I implemented the getlong function in 9i:

create or replace function getlong_9( p_tname in varchar2, p_cname in varchar2, p_rowid in rowid )
return varchar2
is
    l_long_val varchar2(4000);
begin
    execute immediate 'select '||p_cname||' from '||p_tname||' where rowid='''||p_rowid||''''
       into l_long_val;
    return( l_long_val );
end getlong_9;

It works on the following query:

select c1, text, getlong_9('T','TEXT',rowid)
  from t
 where instr( getlong_9('T','TEXT',rowid), 'No' ) > 0;

Apparently, it does a full-table scan. Since there is "rowid" in the function, I got errors when I tried to create a function-based index. What can I do to improve the performance? Thanks a lot.

Followup December 6, 2006 - 9am Central time zone: USE BIND VARIABLES!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!! arg, ....

' where rowid = :x' using p_rowid into l_long_val

and this function is going to fail when your long is actually long, you have to use dbms_sql to piecewise fetch.
December 6, 2006 - 2pm Central time zone
Reviewer: Gordon from Toronto, Canada
USE BIND VARIABLES!!!!!!!!!!!!!!!!!!!!!! -- Oh yeah! It runs 5 times faster now! "actually long" - do you mean the 32K? Based on your previous replies, pl/sql could not handle it if the long is longer than 32K. So why do I use dbms_sql? Thank you very much! Followup December 7, 2006 - 8am Central time zone: dbms_sql can piecewise fetch any bit of the long - regardless of the length. if the long exceeds 32k, the native dynamic stuff won't work very well.
long to varchar2 September 12, 2007 - 3am Central time zone
Reviewer: A reader
I try function you have created but found following error. What cause the use of this function along with dblink raising an error ? Please correct me if I am wrong, Thanks. SYSTEM@ppc> create or replace function get_long(p_tname in varchar2,p_cname in varchar2, 2 p_rowid in rowid) 3 return varchar2 4 is 5 l_cursor integer default dbms_sql.open_cursor; 6 l_n number; 7 l_long_val varchar2(4000); 8 l_long_len number; 9 l_buflen number := 4000; 10 l_curpos number := 0; 11 begin 12 dbms_sql.parse( l_cursor, 13 'select ' || p_cname || ' from ' || p_tname || 14 ' where rowid = :x', 15 dbms_sql.native ); 16 dbms_sql.bind_variable(l_cursor,':x',p_rowid); 17 18 dbms_sql.define_column_long(l_cursor,1); 19 l_n := dbms_sql.execute(l_cursor); 20 21 if (dbms_sql.fetch_rows(l_cursor) > 0) 22 then 23 dbms_sql.column_value_long(l_cursor,1,l_buflen,l_curpos , 24 l_long_val,l_long_len); 25 end if; 26 27 dbms_sql.close_cursor(l_cursor); 28 return l_long_val; 29 end; 30 / Function created. SYSTEM@ppc> desc master.tt; Name Null? Type ----------------------------------------------------- -------- ------------------------------------ X LONG SYSTEM@ppc> desc master.tt@orcl6; Name Null? Type ----------------------------------------------------- -------- ------------------------------------ X LONG SYSTEM@ppc> select get_long('master.tt','x',rowid) from master.tt; GET_LONG('MASTER.TT','X',ROWID) ---------------------------------------------------------------------------------------------------- AAAAAAAAAAAAAAAAAAAAAAAAAaaaaaaaa SYSTEM@ppc> select get_long('master.tt@orcl6','x',rowid) from master.tt@orcl6; select get_long('master.tt@orcl6','x',rowid) from master.tt@orcl6 * ERROR at line 1: ORA-06553: PLS-306: wrong number or types of arguments in call to 'GET_LONG' SYSTEM@ppc> select * from master.tt@orcl6; X -------------------------------------------------------------------------------- AAAAAAAAAAAAAAaaaa a SYSTEM@ppc>
Function doesn't work for Long to Varchar January 3, 2008 - 4pm Central time zone
Reviewer: Maverick
Tom, I am trying to get the data_default column value from user_tab_columns and do some string concatenation on it, like
and function f_long_char is : create or replace function f_long_char(PinVal in Long) return varchar2 is l_str long; begin select pinVal into l_str from dual; return substr(l_str,1,100); end; Still I keep getting "ORA-00997: Illegal use of LONG datatype". What else can I do? the values in this column are like 1,sysdate,null etc.. Any suggestions? Followup January 3, 2008 - 5pm Central time zone: you have to take the "primary key" of user_tab_columns and pass that into the routine, that routine fetches a long (as long as it is 32k or less!!!) and can return the substr.ops$tkyte%ORA10GR2> create or replace 2 function get_data_default 3 ( p_tname in user_tab_columns.table_name%type, 4 p_cname in user_tab_columns.column_name%type 5 ) return varchar2 6 is 7 l_long long; 8 begin 9 select data_default into l_long 10 from user_tab_columns 11 where table_name = p_tname 12 and column_name = p_cname; 13 14 if ( l_long is not null ) 15 then 16 return 'Default ' || substr( l_long, 1, 3000 ); 17 else 18 return null; 19 end if; 20 end; 21 / Function created. ops$tkyte%ORA10GR2> ops$tkyte%ORA10GR2> create table t ( x int default 5, y varchar2(10), z date default sysdate ); Table created. ops$tkyte%ORA10GR2> select table_name, 2 column_name, 3 get_data_default( table_name, column_name ) d_default 4 from user_tab_columns 5 where table_name = 'T' 6 / TABLE_NAME COLUMN_NAM D_DEFAULT ---------- ---------- --------------- T X Default 5 T Y T Z Default sysdate
Long to Varchar2 January 3, 2008 - 10pm Central time zone
Reviewer: Maverick
Thanks Tom. I got it now. Thanks a lot. I'll give it a try.
long January 5, 2008 - 11pm Central time zone
Reviewer: A reader
Tom: 1. When do you use a "Long" type versus a "CLOB" type when you need to store something over 4000 bytes? 2. Can you query or index both the same way? 3. Can you use an insert statement for both types? thanks Followup January 7, 2008 - 8am Central time zone: 1) never 2) no, longs come with many restrictions 3) yes, but there are differences in what you do and how you do it. use clobs.
clob January 8, 2008 - 7pm Central time zone
Reviewer: A reader
Tom: You mean you never use the "Long" type? Always "CLOB"? Is this correct? Wondering why. Followup January 8, 2008 - 7pm Central time zone: correct, do not use long, use clob. for the reasons listed above? mostly: longs come with many restrictions
clob January 20, 2008 - 1pm Central time zone
Reviewer:
A reader
Tom:
1. I tested a sample CLOB column and it seems you can use INSERT and SELECT statements just like with a varchar2 column? Correct?
2. If you are saving the URL and/or HTML form getting posted to a procedure, and the XML output of that procedure, would you set up the two columns as CLOB type if you are not sure how the sizes would play out? The input, of course, would not be huge. The output depends on the query and the data in the database. It could be large.

Followup January 20, 2008 - 7pm Central time zone:
1) sure, up to a LIMIT - the limit of a varchar2.
2) if you anticipate needing to store more than 4000 bytes in an attribute of text, yes.
clob January 20, 2008 - 10pm Central time zone
Reviewer:
A reader
Tom: I can't really tell if there will be more than 4000 bytes. But is there anything to lose by setting the columns to CLOB type? You mention that you can only see the first 4000 bytes in a select statement - how can I see more if I want to?

Followup January 21, 2008 - 8am Central time zone:
.... I can't really tell if there will be more than 4000 bytes. ...
then you need to do a little research, ask a few questions, think about this - if you don't know what you are getting, how can you even write your application?
You can see it "all" in a select statement - but only because selects do not print themselves; you print them, and you have access to ALL of the data. You can only select out a 4000 byte slice of a clob as a varchar2 in a select statement, but you can certainly select out the clob without converting to varchar2 and get the ENTIRE thing.
CLOB January 21, 2008 - 10am Central time zone
Reviewer:
A reader
Tom: It is a similar situation to how you are storing the followup questions. Are you using a "CLOB" type? One user may have 2-3 lines and another may have 5-6 pages. You can't really tell unless you limit the user yourself to a certain number of characters. I assume you can always go back and change "CLOB" to "VARCHAR2" if you notice that CLOB is not required later, after you see the stored data.

Followup January 21, 2008 - 9pm Central time zone:
yes, I use clob - because each might be at least 32k in size.
You either a) say it will be 4000 bytes or less, or b) say it will be as much as you want. Once YOU make that decision, the datatype will be obvious.
clob January 21, 2008 - 9pm Central time zone
Reviewer:
A reader
Tom: a clob can hold 4 gigabytes and a pl/sql variable 32K. How do you insert 1 MB or 1 GB into the column using an insert statement?

Followup January 21, 2008 - 10pm Central time zone:
A clob can hold terabytes.
You would a) insert an empty_clob(), returning it into a plsql variable - you have a LOB LOCATOR now, much like a FILE HANDLE in any other programming language; b) you would use dbms_lob to write to it, append to it, copy to it - piece by piece.
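A minimal sketch of the two steps described above (the table big_text and its columns are hypothetical names, not from the thread): insert an empty_clob(), catch the locator with RETURNING, then feed the lob piece by piece with dbms_lob.writeappend.

create table big_text ( id int, doc clob );

declare
   l_clob  clob;
   l_piece varchar2(32000) := rpad( 'x', 32000, 'x' );
begin
   insert into big_text ( id, doc )
   values ( 1, empty_clob() )
   returning doc into l_clob;        -- l_clob is now a LOB locator

   for i in 1 .. 32                  -- roughly 1 MB in total
   loop
      dbms_lob.writeappend( l_clob, length(l_piece), l_piece );
   end loop;

   commit;
end;
/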
clob January 21, 2008 - 10pm Central time zone
Reviewer:
A reader
Tom:
1. What is the size limit of CLOB in 9i?
2. Is clob data always stored in an external file regardless of size?
3. Now, based on what you say, I should never use "insert into table values (my_clob)" unless I know that the max data I have is 32 KB, correct? I should always use DBMS_LOB.

Followup January 22, 2008 - 6am Central time zone:
1) 4gb in that old release
2) never is it stored in an external file - NEVER. We have read-only bfiles - whereby you put data into a file and then point to it. We do NOT store things in external files. If you have access to Expert Oracle Database Architecture, I cover the ins and outs of this datatype in detail. Basically, by default, if the lob is 4000 bytes or less, it is stored inline much like a varchar2 (in the base table). If it exceeds 4000 bytes, then a pointer to an auxiliary data structure is stored in the table and the lob itself is stored in chunks in another segment.
3) probably, but if you are using plsql exclusively to interface with this clob (like I do here on asktom), I KNOW my lobs will be 32k or less since PLSQL can only accept 32k.... If you are using some other language, you would use the APIs there. No, you would not ALWAYS use dbms_lob; it depends on what LANGUAGE you are programming in.
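The in-row / out-of-row behaviour described in point 2 can also be spelled out in the table definition. A sketch with made-up names - ENABLE STORAGE IN ROW is the default, so this only makes explicit what already happens:

create table doc_store
( id   int,
  body clob
)
lob ( body ) store as
( enable storage in row );

-- ENABLE STORAGE IN ROW (the default): values of roughly 4000 bytes or less
-- stay in the row piece; larger values move out to the LOB segment.
-- DISABLE STORAGE IN ROW forces every value out of line.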
clob January 30, 2008 - 7pm Central time zone
Reviewer:
A reader
Tom: I saved 100 characters into a CLOB column and when I query the column in sql*plus I am only seeing 81, and then it cuts it off. I thought it would show up to 4000. When I did "set long 2000" it did show the remaining text. Can you explain what is happening? I thought set long was for the long data type.

Followup January 31, 2008 - 3am Central time zone:
SQL> set long nnnnnnn
where nnnnnnn is the amount of longs/lobs you want displayed. "show long" will show you what your current value is.... set long is for longs and lobs both.
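A short transcript-style sketch of the SET LONG point for anyone following along (MY_VIEW is a placeholder name); it only controls how much SQL*Plus displays, not what is stored:

SQL> set long 100000
SQL> set longchunksize 100000
SQL> show long
long 100000
SQL> select text from user_views where view_name = 'MY_VIEW';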
I am getting this error March 26, 2008 - 12pm Central time zone
Reviewer:
Priya from Los Angeles, CA USA
I have created the getlong function you have above and I am having problems when I execute the SQL using the function. I used

select getlong('sysuat.test_table@uat01','descr',rowid) from sysuat.test_table@uat01

which gives me the error

ORA-06553: PLS-306: wrong number or types of arguments in call to 'GETLONG'

When I remove the quotes around the 'sysuat.test_table@uat01' then I get

ORA-00904: "SYSUAT"."test_table": invalid identifier

Why is this happening? Sorry if I didn't understand correctly. Please help.

Followup March 26, 2008 - 4pm Central time zone:
well, getlong isn't going to work well over a database link.
do this:

create view test as select rowid rid from sysuat.test_table@uat01;
desc test;

post results.
Results March 26, 2008 - 4pm Central time zone
Reviewer:
Priya from Los Angeles, CA, USA
Hi Tom, thanks for your response. I have created the view and here is the result when I desc my view:

SQL> create view test_long as select rowid rid from sysadm.PS_CBRE_OPP_WNT_VW@crmuat01;

View created.

SQL> desc test_long;
 Name                                      Null?    Type
 ----------------------------------------- -------- ----------------------------
 RID                                                ROWID

SQL>

Followup March 26, 2008 - 4pm Central time zone:
ok, desc getlong for me and show me an actual cut and paste from sqlplus of it failing (but don't count on this ever working, the long API will not be dblink friendly, it won't fly - for what you need to do, we can use the sqlplus COPY command to copy the data)
Results March 26, 2008 - 4pm Central time zone
Reviewer:
Priya from Los Angeles, CA ,USA
SQL> desc getlong;
FUNCTION getlong RETURNS VARCHAR2
 Argument Name                  Type                    In/Out Default?
 ------------------------------ ----------------------- ------ --------
 P_TNAME                        VARCHAR2                IN
 P_CNAME                        VARCHAR2                IN
 P_ROWID                        ROWID                   IN

SQL> select getlong('test_long','DESCRLONG',rowid) from test_long;
select getlong('test_long','DESCRLONG',rowid) from test_long
       *
ERROR at line 1:
ORA-01445: cannot select ROWID from, or sample, a join view without a key-preserved table
ORA-02063: preceding line from CRMUAT01

SQL>

Please bear with me if this is not what you wanted to see. Do I need to run the SQL with the db link, or is this what you wanted to see? Thanks for your help as always.

Followup March 27, 2008 - 10am Central time zone:
well, that is ENTIRELY different from what you said before:
"gives me the error ORA-06553: PLS-306: wrong number or types of arguments in call to 'GETLONG'"
basically, please reproduce the actual error you were getting - meaning, yes, run the actual code giving you a problem.
March 26, 2008 - 5pm Central time zone
Reviewer:
Priya from Los Angeles,CA,USA
Since this getlong function won't work very well with db links, can you guide me on how to use the COPY command to copy the data over? Also, I need the data to come over every day from the other database. I am using a stored procedure to insert data into other tables using dblinks and have scheduled it to run daily. Can this COPY command be used in the stored proc? Thanks for your help.

Followup March 27, 2008 - 10am Central time zone:
The copy command cannot be used in a stored procedure, no, it is a sqlplus command:
http://asktom.oracle.com/tkyte/Misc/MoveLongs.html
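For reference, a hedged sketch of what a SQL*Plus COPY of that LONG column might look like for the tables mentioned in this thread (the password, the local table name and the exact column list are placeholders; see the linked article for the full treatment). Because COPY is a client command, it would have to run from a sqlplus script scheduled by the OS, not from the stored procedure itself.

SQL> set long 2000000000
SQL> set copycommit 100
SQL> set arraysize 15
SQL> copy from sysuat/your_password@uat01 -
>    create local_test_table -
>    using select descr from sysuat.test_table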
REsults March 27, 2008 - 12pm Central time zone
Reviewer:
Priya from Los Angeles, CA,USA
Hi Tom, here are the results. Thanks for your time.

SQL> create view test_long as select rowid rid from sysadm.PS_CBRE_OPP_WNT_VW@crmuat01;

View created.

SQL> desc test_long;
 Name                                      Null?    Type
 ----------------------------------------- -------- ----------------------------
 RID                                                ROWID

SQL> desc getlong;
FUNCTION getlong RETURNS VARCHAR2
 Argument Name                  Type                    In/Out Default?
 ------------------------------ ----------------------- ------ --------
 P_TNAME                        VARCHAR2                IN
 P_CNAME                        VARCHAR2                IN
 P_ROWID                        ROWID                   IN

SQL> select getlong('sysadm.ps_cbre_opp_well@crmuat01','descrlong',rowid) from sysadm.ps_cbre_opp_well@crmuat01;
select getlong('sysadm.ps_cbre_opp_well@crmuat01','descrlong',rowid) from sysadm.ps_cbre_opp_well@cr
       *
ERROR at line 1:
ORA-06553: PLS-306: wrong number or types of arguments in call to 'GETLONG'

SQL>

Followup March 27, 2008 - 1pm Central time zone:
it would have to be the remote rowid that is causing the issue - this is not going to work over a dblink in any case. Even if you got the rowid to pass down to the procedure, the dbms_sql stuff is going to fail on a long over the dblink; see sqlplus copy above.
Thank you. March 27, 2008 - 6pm Central time zone
Reviewer:
Priya from Los Angeles, CA USA
Thank you Tom. Is there a reason why LONG datatype is still around? Followup March 30, 2008 - 8am Central time zone: backwards compatibility
Still today? June 26, 2008 - 4pm Central time zone
Reviewer:
Mark Brady from Baltimore, MD
Is this package you've provided still the best that's around? you'd think that, by now, this would be in a built-in package, like DBMS_ASKTOM_GEMS.
interesting March 10, 2009 - 6pm Central time zone
Reviewer:
Duke Ganote from Clermont County, Ohio USA
create or replace function f_column_expression
  ( p_table_name IN VARCHAR2, p_index_name IN VARCHAR2 )
return varchar2
as
   l_long LONG;
begin
   SELECT column_expression into l_long
     FROM user_ind_expressions
    WHERE table_name = p_table_name
      AND index_name = p_index_name;
   return substr( l_long, 1, 4000 );
end;
/

select table_name, index_name,
       f_column_expression(table_name, index_name) exprsn
  from user_ind_expressions
/

TABLE_NAME                     INDEX_NAME
------------------------------ ------------------------------
EXPRSN
--------------------------------------------------------------------------------
O_POLICY_TRANS                 O_POLICY_TRANS_UNQ1
CASE WHEN "ODS_CURR_REC_IND"='Y' THEN TO_CHAR("INS_CORP_NBR")||"PRODUCT_CD"||TO_CHAR("POLICY_NBR")||"EFF_DT"||"TRANS_DT"||"TRANS_TM" END

but I guess it works!
All_views to a table July 8, 2009 - 8pm Central time zone
Reviewer:
Gabbar from Chambal
Hi Tom, I am trying to store the data of all_views into a table with the text column as type varchar2. The text column (of type long) is less than 32K for all the views. How can I do that?

Followup July 8, 2009 - 8pm Central time zone:
varchar2 only goes to 4000 bytes, not 32k - you will not use a varchar2, you can use a clob. You would just use TO_LOB(text) in your select list.
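A minimal sketch of that TO_LOB suggestion (the target table name is made up): TO_LOB is only valid in the select list of a CTAS or an INSERT ... SELECT, which is exactly what is needed here.

create table view_text_copy
as
select owner, view_name, to_lob(text) text
  from all_views;

-- or into an existing table that already has a CLOB column:
insert into view_text_copy ( owner, view_name, text )
select owner, view_name, to_lob(text)
  from all_views;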
Varchar2 size declaration July 14, 2009 - 1am Central time zone
Reviewer:
Sreekanth Babu Vakiti from India
Really nice explanation GURU (BOSS). Can u pls clarify when to use varchar2 and when to use char.
1) The data in the column will not exceed 500. If I declare the size as 4000, i.e. the max size, what are the disadvantages over declaring the size as varchar2(500)?
2) In another table, for a column which is mandatory and whose size is 1, if I use varchar(1), is there any disadvantage? Thanks in advance....

Followup July 15, 2009 - 10am Central time zone:
who is "u"? I asked around, I could not find them. So sorry.
short answer: never ever use char, not ever, not for anything.
Converting CLOB to varchar2 November 2, 2009 - 9am Central time zone
Reviewer:
Tunde from Liverpool, UK
Hello Tom, I have learnt a lot from this site. Many thanks for your help. I am working on a system to re-create database objects from our production server on our development server, aimed at automating the creation of database objects. After a lot of trouble with dbms_datapump (the API), I settled for database links. I then chose to use the dbms_metadata package to get the metadata, because the normal CTAS query would not copy a partitioned table over. The main problem now is that Oracle 10g R2, which we use, does not allow copying of the clob datatype over the database link. I created my own version to retrieve varchar2, but whenever the metadata is larger than the max of 32767, it crashes. Please, do you have any advice or suggestions for me? Thanks once again.

Followup November 9, 2009 - 12pm Central time zone:
... The main problem now is that Oracle 10g R2 which we use does not allow copying of the clob datatype over the database link. ...
excuse me? sure it does.
Not sure what issues you ran into with datapump; it seems to me I would rather have spent an extra bit of time with that than doing it all myself, but.... clobs are database-link friendly to copy with. You'll have to be more specific about your exact approach so we can see what might need to be modified.
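For what it is worth, a hedged sketch of the kind of statement that does move a CLOB across a database link in 10gR2 (all object and link names below are made up): the CLOB has to land straight into a local table via INSERT ... SELECT or CTAS; returning it to the client with a plain SELECT over the link is what typically raises ORA-22992 in that release.

create table local_metadata ( object_name varchar2(128), ddl_text clob );

insert into local_metadata ( object_name, ddl_text )
select object_name, ddl_text
  from remote_metadata@prod_link;

-- by contrast, this form is the one that tends to fail in that release:
-- select ddl_text from remote_metadata@prod_link;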
Inserting "Long" into "Varchar2" columns November 17, 2009 - 6pm Central time zone
Reviewer:
VIKAS from I N D I A
Dear Mr. Kyte, I need to know and understand from you what is happening here. The scenario is like:

insert into PART_TAB_INFO (LAST_PART_HV)
(select to_lob(HIGH_VALUE) from DBA_TAB_PARTITIONS);

Here column LAST_PART_HV is declared as VARCHAR2(4000), and column DBA_TAB_PARTITIONS(HIGH_VALUE) is of the datatype LONG. While inserting as above the transaction does take place successfully, but then when I do

select LAST_PART_HV from LAST_PART_HV;

I see no values. May I please know why this is happening and how I can get to see the inserted data using a simple "select". Thanks for your valuable help in this regard.

Followup November 23, 2009 - 2pm Central time zone:
to_lob only works with lobs - not varchar's.
change your table to have a lob, and if you want, create a view with dbms_lob.substr() to return just the first 4000 characters.
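A sketch of what the followup suggests, reusing the poster's table and column names but assuming the table is (re)built with a CLOB column - a VARCHAR2(4000) column cannot simply be altered to CLOB, so the column needs to be created as a CLOB to begin with; the view name and the extra TABLE_NAME column are illustrative.

create table part_tab_info
( table_name   varchar2(30),
  last_part_hv clob
);

insert into part_tab_info ( table_name, last_part_hv )
select table_name, to_lob(high_value)
  from dba_tab_partitions;

create or replace view part_tab_info_v
as
select table_name,
       dbms_lob.substr( last_part_hv, 4000, 1 ) last_part_hv
  from part_tab_info;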
Inserting "Long" into "Varchar2" columns November 23, 2009 - 11pm Central time zone
Reviewer:
Vikas from I N D I A
Thanks a lot for your valuable advice. Can you please point me towards any link / resource (by you) that throws more light on the dbms_lob.substr() function? Also, given the above scenario, what approach would you have adopted, considering the fact that the "long to varchar2" conversion would be applied to more than one row, making the use of any function using "rowid" a bit complicated and lengthy? Would like to seek your expert advice on this issue. Thanks and Regards.

Followup November 24, 2009 - 10am Central time zone:
did you search the documentation - it would have taken 5 seconds to find.
http://www.oracle.com/pls/db112/search?remark=quick_search&word=dbms_lob.substr
... more than one row making the use of any function using "rowid" a bit complicated and lengthy ...
I have no idea what you mean by that. Why would it be more complicated for 1,000,000 rows than for 1 row? You write the function once and it works on any row. Why would it be more lengthy? I don't know what you mean.
Long to varchar, but size larger than 4000 and less than 32K November 25, 2009 - 3am Central time zone
Reviewer:
Iris from Taipei,Taiwan
Hi Tom, I used getlong function to retrieve data. I insert data to table t , but size is larger than 4000 and less than 32k. --create table create table t (x long); --insert data declare data long default lpad('@bemis',5000,'*'); begin insert into t values (data); end; / --retrieve data from table t select X from t where GETLONG('t','x',ROWID) LIKE '%@bemis%'; Result is "no rows selected" Could you help me to solve this problem? Thank you. Followup November 25, 2009 - 1pm Central time zone: this will be hugely inefficient for hundreds or thousands of rows, but:ops$tkyte%ORA9IR2> create table t ( x int, y long ); Table created. ops$tkyte%ORA9IR2> ops$tkyte%ORA9IR2> declare 2 l_max number := 0; 3 begin 4 for c in (select rownum r, text from all_views) 5 loop 6 insert into t values ( c.r, c.text ); 7 l_max := greatest( l_max, length(c.text) ); 8 end loop; 9 dbms_output.put_line( 'max length = ' || l_max ); 10 end; 11 / max length = 14981 PL/SQL procedure successfully completed. ops$tkyte%ORA9IR2> ops$tkyte%ORA9IR2> ops$tkyte%ORA9IR2> create global temporary table gtt ( lob clob ) on commit delete rows; Table created. ops$tkyte%ORA9IR2> ops$tkyte%ORA9IR2> create or replace function get_and_search_long 2 ( p_tname in varchar2, 3 p_cname in varchar2, 4 p_rowid in rowid, 5 p_search_str in varchar2 default NULL ) return varchar2 6 as 7 pragma autonomous_transaction; 8 l_return varchar2(4000); 9 begin 10 execute immediate ' 11 insert into gtt 12 select to_lob(' || dbms_assert.simple_sql_name( p_cname ) || ') 13 from ' || dbms_assert.sql_object_name( p_tname ) || ' 14 where rowid = :p_rowid' using p_rowid; 15 16 begin 17 select substr( lob, 1, 4000 ) 18 into l_return 19 from gtt 20 where lob like p_search_str 21 or p_search_str is null; 22 exception 23 when no_data_found 24 then 25 null; 26 end; 27 commit; 28 return l_return; 29 end; 30 / Function created. 
ops$tkyte%ORA9IR2> ops$tkyte%ORA9IR2> select * 2 from ( 3 select x, get_and_search_long( 't', 'y', t.rowid, '%ZZ%' ) str 4 from t 5 ) 6 where str is not null 7 / X ---------- STR ------------------------------------------------------------------------------- 1118 select "INST_ID","RECID","STAMP","NAME","TAG","FILE#","RFILE#","CREATION_CHANGE #","CREATION_TIME","RESETLOGS_CHANGE#","RESETLOGS_TIME","INCREMENTAL_LEVEL","CH ECKPOINT_CHANGE#","CHECKPOINT_TIME","ABSOLUTE_FUZZY_CHANGE#","RECOVERY_FUZZY_CH ANGE#","RECOVERY_FUZZY_TIME","ONLINE_FUZZY","BACKUP_FUZZY","MARKED_CORRUPT","ME DIA_CORRUPT","LOGICALLY_CORRUPT","BLOCKS","BLOCK_SIZE","OLDEST_OFFLINE_RANGE"," DELETED","STATUS","COMPLETION_TIME","CONTROLFILE_TYPE","KEEP","KEEP_UNTIL","KEE P_OPTIONS","SCANNED" from gv$datafile_copy 1119 select "INST_ID","FILE#","STATUS","ERROR","FORMAT","RECOVER","FUZZY","CREATION_ CHANGE#","CREATION_TIME","TABLESPACE_NAME","TS#","RFILE#","RESETLOGS_CHANGE#"," RESETLOGS_TIME","CHECKPOINT_CHANGE#","CHECKPOINT_TIME","CHECKPOINT_COUNT","BYTE S","BLOCKS","NAME" from gv$datafile_header 1234 select "INST_ID","RECID","STAMP","DEVICE_TYPE","HANDLE","COMMENTS","MEDIA","MED IA_POOL","TAG","STATUS","DELETED","FILE#","CREATION_CHANGE#","CREATION_TIME","R ESETLOGS_CHANGE#","RESETLOGS_TIME","CHECKPOINT_CHANGE#","CHECKPOINT_TIME","ABSO LUTE_FUZZY_CHANGE#","RECOVERY_FUZZY_CHANGE#","RECOVERY_FUZZY_TIME","INCREMENTAL _LEVEL","ONLINE_FUZZY","BACKUP_FUZZY","BLOCKS","BLOCK_SIZE","OLDEST_OFFLINE_RAN GE","START_TIME","COMPLETION_TIME","ELAPSED_SECONDS","CONTROLFILE_TYPE","KEEP", "KEEP_UNTIL","KEEP_OPTIONS" from gv$proxy_datafile 1092 select "INST_ID","RECID","STAMP","SET_STAMP","SET_COUNT","FILE#","CREATION_CHAN GE#","CREATION_TIME","RESETLOGS_CHANGE#","RESETLOGS_TIME","INCREMENTAL_LEVEL"," INCREMENTAL_CHANGE#","CHECKPOINT_CHANGE#","CHECKPOINT_TIME","ABSOLUTE_FUZZY_CHA NGE#","MARKED_CORRUPT","MEDIA_CORRUPT","LOGICALLY_CORRUPT","DATAFILE_BLOCKS","B LOCKS","BLOCK_SIZE","OLDEST_OFFLINE_RANGE","COMPLETION_TIME","CONTROLFILE_TYPE" from gv$backup_datafile 1827 select "RECID","STAMP","SET_STAMP","SET_COUNT","FILE#","CREATION_CHANGE#","CREA TION_TIME","RESETLOGS_CHANGE#","RESETLOGS_TIME","INCREMENTAL_LEVEL","INCREMENTA L_CHANGE#","CHECKPOINT_CHANGE#","CHECKPOINT_TIME","ABSOLUTE_FUZZY_CHANGE#","MAR KED_CORRUPT","MEDIA_CORRUPT","LOGICALLY_CORRUPT","DATAFILE_BLOCKS","BLOCKS","BL OCK_SIZE","OLDEST_OFFLINE_RANGE","COMPLETION_TIME","CONTROLFILE_TYPE" from v$ba ckup_datafile 1853 select "RECID","STAMP","NAME","TAG","FILE#","RFILE#","CREATION_CHANGE#","CREATI ON_TIME","RESETLOGS_CHANGE#","RESETLOGS_TIME","INCREMENTAL_LEVEL","CHECKPOINT_C HANGE#","CHECKPOINT_TIME","ABSOLUTE_FUZZY_CHANGE#","RECOVERY_FUZZY_CHANGE#","RE COVERY_FUZZY_TIME","ONLINE_FUZZY","BACKUP_FUZZY","MARKED_CORRUPT","MEDIA_CORRUP T","LOGICALLY_CORRUPT","BLOCKS","BLOCK_SIZE","OLDEST_OFFLINE_RANGE","DELETED"," STATUS","COMPLETION_TIME","CONTROLFILE_TYPE","KEEP","KEEP_UNTIL","KEEP_OPTIONS" ,"SCANNED" from v$datafile_copy 1854 select "FILE#","STATUS","ERROR","FORMAT","RECOVER","FUZZY","CREATION_CHANGE#"," CREATION_TIME","TABLESPACE_NAME","TS#","RFILE#","RESETLOGS_CHANGE#","RESETLOGS_ TIME","CHECKPOINT_CHANGE#","CHECKPOINT_TIME","CHECKPOINT_COUNT","BYTES","BLOCKS ","NAME" from v$datafile_header 1972 select "RECID","STAMP","DEVICE_TYPE","HANDLE","COMMENTS","MEDIA","MEDIA_POOL"," TAG","STATUS","DELETED","FILE#","CREATION_CHANGE#","CREATION_TIME","RESETLOGS_C HANGE#","RESETLOGS_TIME","CHECKPOINT_CHANGE#","CHECKPOINT_TIME","ABSOLUTE_FUZZY 
_CHANGE#","RECOVERY_FUZZY_CHANGE#","RECOVERY_FUZZY_TIME","INCREMENTAL_LEVEL","O NLINE_FUZZY","BACKUP_FUZZY","BLOCKS","BLOCK_SIZE","OLDEST_OFFLINE_RANGE","START _TIME","COMPLETION_TIME","ELAPSED_SECONDS","CONTROLFILE_TYPE","KEEP","KEEP_UNTI L","KEEP_OPTIONS" from v$proxy_datafile 8 rows selected. ops$tkyte%ORA9IR2> ops$tkyte%ORA9IR2> select x, get_and_search_long( 't', 'y', t.rowid ) from t where rownum = 1; X ---------- GET_AND_SEARCH_LONG('T','Y',T.ROWID) ------------------------------------------------------------------------------- 1 select OWNER, TABLE_NAME, TABLESPACE_NAME, CLUSTER_NAME, IOT_NAME, PCT_FREE, PCT_USED, INI_TRANS, MAX_TRANS, INITIAL_EXTENT, NEXT_EXTENT, MIN_EXTENTS, MAX_EXTENTS, PCT_INCREASE, FREELISTS, FREELIST_GROUPS, LOGGING, BACKED_UP, NUM_ROWS, BLOCKS, EMPTY_BLOCKS, AVG_SPACE, CHAIN_CNT, AVG_ROW_LEN, AVG_SPACE_FREELIST_BLOCKS, NUM_FREELIST_BLOCKS, DEGREE, INSTANCES, CACHE, TABLE_LOCK, SAMPLE_SIZE, LAST_ANALYZED, PARTITIONED, IOT_TYPE, NULL, NULL, NULL, TEMPORARY, SECONDARY, NESTED, BUFFER_POOL, ROW_MOVEMENT, GLOBAL_STATS, USER_STATS, DURATION, SKIP_CORRUPT, MONITORING, CLUSTER_OWNER, DEPENDENCIES, COMPRESSION from all_tables union all select OWNER, TABLE_NAME, TABLESPACE_NAME, CLUSTER_NAME, IOT_NAME, PCT_FREE, PCT_USED, INI_TRANS, MAX_TRANS, INITIAL_EXTENT, NEXT_EXTENT, MIN_EXTENTS, MAX_EXTENTS, PCT_INCREASE, FREELISTS, FREELIST_GROUPS, LOGGING, BACKED_UP, NUM_ROWS, BLOCKS, EMPTY_BLOCKS, AVG_SPACE, CHAIN_CNT, AVG_ROW_LEN, AVG_SPACE_FREELIST_BLOCKS, NUM_FREELIST_BLOCKS, DEGREE, INSTANCES, CACHE, TABLE_LOCK, SAMPLE_SIZE, LAST_ANALYZED, PARTITIONED, IOT_TYPE, OBJECT_ID_TYPE, TABLE_TYPE_OWNER, TABLE_TYPE, TEMPORARY, SECONDARY, NESTED, BUFFER_POOL, ROW_MOVEMENT, GLOBAL_STATS, USER_STATS, DURATION, SKIP_CORRUPT, MONITORING, CLUSTER_OWNER, DEPENDENCIES, COMPRESSION from all_object_tables 1 row selected. If you pass null for the search string, it just returns the data found. If you pass a search string, it'll return NULL if there is no match or the first bit of the string otherwise.
Long to Varchar, but one data row's length is over 32K November 30, 2009 - 1am Central time zone
Reviewer:
Iris from Taipei,Taiwan
Hi, Tom
Thanks for your help. There has a trouble for one of data length over 32K. How could I rewrite the get_and_search_long function to avoid this data? Thank you.

Followup November 30, 2009 - 1am Central time zone:
... There has a trouble for one of data length over 32K. ...
huh??? No clue what you mean. My car won't start either - how can you fix it?
I have no idea what you mean, this works for things larger than 32k. I created a view that was 290k plus in size:

create or replace view v
as
select 1 x
/* this is a very long comment ....... repeated over and over ... .... */
from dual the_end;

and I copied all_views into a table using:
http://asktom.oracle.com/pls/asktom/ASKTOM.download_file?p_file=6551268808763917945
and I can:

ops$tkyte%ORA10GR2> select the_whole_view, textLength, substr(get_and_search_long( 'foo', 'thetext', rowid ),1,40)
  2    from foo
  3   where get_and_search_long( 'foo', 'thetext', rowid, '%the_end%' ) is not null
  4  /

THE_WHOLE_VIEW                                                      TEXTLENGTH
----------------------------------------------------------------- ----------
SUBSTR(GET_AND_SEARCH_LONG('FOO','THETEX
----------------------------------------
OPS$TKYTE.V                                                             291700
select 1 x /* this is a comment a very l
Long to Varchar, but one data row's length is over 32K November 30, 2009 - 4am Central time zone
Reviewer:
Iris from Taipei,Taiwan
Hi Tom, sorry for my unclear question. My table t has many rows, but one row's data is longer than 32K. When I use the get_and_search_long function, the system hangs for a long time. So I want to know how I could skip that row when I use get_and_search_long to retrieve y from t.

--create table
create table t ( x int, y long );

For example, the first row's data is longer than 32K:

x    y  |--------------32767-------|
-------------------------------------------
1    ********************************YYYYY
2    ***************YYYYY********

Followup November 30, 2009 - 2pm Central time zone:
Ok, use
http://asktom.oracle.com/pls/asktom/f?p=100:11:0::::P11_QUESTION_ID:426318778291
longsubstr - and write a function that calls longsubstr, does a LIKE on the return string and returns 0 if it is not like the search string, 1 otherwise.
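I don't have the longsubstr package from that link handy, so here is a hedged, self-contained sketch in the same spirit, built only from the dbms_sql calls already used in this thread: it walks the LONG in 4000-byte pieces (so a 32K+ value never has to fit into one PL/SQL variable) and returns 1 as soon as the search string is found, 0 otherwise. The function name and the chunk-boundary handling are my own, not Tom's.

create or replace function long_contains
( p_tname  in varchar2,
  p_cname  in varchar2,
  p_rowid  in rowid,
  p_search in varchar2
) return number
as
   l_cursor integer default dbms_sql.open_cursor;
   l_n      number;
   l_buf    varchar2(4000);
   l_buflen number := 4000;
   l_len    number;
   l_offset number := 0;
   l_tail   varchar2(4000);
begin
   dbms_sql.parse( l_cursor,
       'select ' || p_cname || ' from ' || p_tname || ' where rowid = :x',
       dbms_sql.native );
   dbms_sql.bind_variable( l_cursor, ':x', p_rowid );
   dbms_sql.define_column_long( l_cursor, 1 );
   l_n := dbms_sql.execute( l_cursor );

   if ( dbms_sql.fetch_rows( l_cursor ) > 0 )
   then
      loop
         dbms_sql.column_value_long( l_cursor, 1, l_buflen, l_offset,
                                     l_buf, l_len );
         exit when nvl(l_len,0) = 0;
         -- glue the tail of the previous piece on, so a match that
         -- straddles a 4000 byte boundary is not missed
         if ( instr( l_tail || l_buf, p_search ) > 0 )
         then
            dbms_sql.close_cursor( l_cursor );
            return 1;
         end if;
         l_tail   := substr( l_buf, greatest( 1, l_len - length(p_search) + 2 ) );
         l_offset := l_offset + l_len;
      end loop;
   end if;

   dbms_sql.close_cursor( l_cursor );
   return 0;
end;
/

With the table from the question above, usage would look something like:

select x from t where long_contains( 't', 'y', rowid, '@bemis' ) = 1;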
December 23, 2009 - 12pm Central time zone
Reviewer:
Sean from MS
On 10gR2:

sql> create table t ( x int, y long );

Table created.

sql> declare
  2     l_max number := 0;
  3  begin
  4     for c in (select rownum r, text from all_views)
  5     loop
  6        insert into t values ( c.r, c.text );
  7        l_max := greatest( l_max, length(c.text) );
  8     end loop;
  9     dbms_output.put_line( 'max length = ' || l_max );
 10  end;
 11  /
declare
*
ERROR at line 1:
ORA-06502: PL/SQL: numeric or value error
ORA-06512: at line 4

Why does it fail on 10gR2?

Followup December 31, 2009 - 12pm Central time zone:
likely you have a view >32k.
Followup September 20, 2002 - 8am Central time zone: because the documentation says it will? LONGS cannot be used in an insert into. Put more than 32k in the long and it'll fail miserably in plsql as well.
inserting lengthy strings February 26, 2010 - 8am Central time zone
Reviewer:
Badri from India
Thanks a ton for this informative forum!! Coming to my question, I have a file with the below format:

1,string worth a length of 33000
2,string worth a length of 33000
3,string worth a length of 33000
4,string worth a length of 33000
.
.
.

I am using Oracle 9iR2. My table is "create table t(x int, y clob/long)" -- I tried both. When I tried to load this file into t, I found the error:

Record 1: Rejected - Error on table T, column Y.
Field in data file exceeds maximum length

I went through this post and found out a few points, like: the best way to insert into a table is by using PL/SQL, and that can do it up to a length of 32767; also, when we opt to use the DBMS_SQL package we can insert up to a length of 2GB into a LONG column. I know you will suggest I use CLOB instead of LONG.
1. But when using SQL*Loader, is there anything we can do to load the above file without errors?
2. If that's not the case, what would you suggest as the best way to get this done?
Thanks in advance, Badri

Followup March 1, 2010 - 11am Central time zone:
you can load longs with sqlldr - not sure what your issue was, you don't give us anything to look at? We would sort of need the control file and a detailed explanation of how you create the input data set.
and you are correct, use clobs - not that a clob would change anything directly here.
Inserting lengthy strings March 2, 2010 - 4am Central time zone
Reviewer:
Badri from India
For the above scenario I have created a test data file which closely resembles the data file with which we faced this issue.

$ cat test.sh
for i in $(seq 10)
do
  echo -n "$i,"
  for j in $(seq 33000)
  do
    echo -n "a"
  done
  echo
done
$ sh test.sh > test.txt
$ cat test.txt
1,aaaaaaaaaaa....33k times
2,aaaaaaaaaaa....33k times
.
.
.
10,aaaaaaaaaa....33k times
$ cat test.ctl
load data
infile test.txt
replace
into table T
fields terminated by ',' optionally enclosed by '"'
(x,y)
$

My table is "create table t(x int, y clob/long)" -- I tried both. When I tried to load this file into t, I found the error:

Record 1: Rejected - Error on table T, column Y.
Field in data file exceeds maximum length

Request you to kindly suggest if anything needs to be changed in the control file. Please let me know if I need to provide any more info. Thanks in advance, Badri

Followup March 2, 2010 - 8am Central time zone:
load data
infile test.txt
replace
into table T
fields terminated by ',' optionally enclosed by '"'
(x, y CHAR(n))

Set N to a large enough number to permit your largest anticipated string to be loaded. The default for n - when you just have y by itself - is char(255).
Inserting large strings March 2, 2010 - 8am Central time zone
Reviewer:
Badri from India
Thanks a lot for that solution. Badri
Inserting lengthy strings March 2, 2010 - 10am Central time zone
Reviewer:
Badri from India
Hi Tom, Just curious to know, when the legnth of the string is extended to a bigger number.... I got the below error SQL*Loader-510: Physical record in data file (/ap01/corpsub/one.txt) is longer than the maximum(1048576) SQL*Loader-2026: the load was aborted because SQL Loader cannot continue. I changed my data file test.txt as 1,aaaaa...3million times...aaaaa 2,aaaaa...3million times...aaaaa 3,aaaaa...3million times...aaaaa 4,aaaaa...3million times...aaaaa 5,aaaaa...3million times...aaaaa And here is my control file, $cat test.ctl load data infile test.txt replace into table t fields terminated by ',' (x,y char(3000000)) I understand that I am getting this error as this is the max limit for string from a file. I tried to change the readsize=bindsize=3000000 and same error is still persisting. Is there any workaround or solution for this error? Thanks in advance Badri Followup March 2, 2010 - 12pm Central time zone: $ sqlldr SQL*Loader: Release 11.2.0.1.0 - Production on Tue Mar 2 13:43:58 2010 Copyright (c) 1982, 2009, Oracle and/or its affiliates. All rights reserved. Usage: SQLLDR keyword=value [,keyword=value,...] Valid Keywords: userid -- ORACLE username/password control -- control file name log -- log file name bad -- bad file name data -- data file name discard -- discard file name discardmax -- number of discards to allow (Default all) skip -- number of logical records to skip (Default 0) load -- number of logical records to load (Default all) errors -- number of errors to allow (Default 50) rows -- number of rows in conventional path bind array or between direct path data saves (Default: Conventional path 64, Direct path all) bindsize -- size of conventional path bind array in bytes (Default 256000) silent -- suppress messages during run (header,feedback,errors,discards,partitions) direct -- use direct path (Default FALSE) parfile -- parameter file: name of file that contains parameter specifications parallel -- do parallel load (Default FALSE) file -- file to allocate extents from skip_unusable_indexes -- disallow/allow unusable indexes or index partitions (Default FALSE) skip_index_maintenance -- do not maintain indexes, mark affected indexes as unusable (Default FALSE) commit_discontinued -- commit loaded rows when load is discontinued (Default FALSE) readsize -- size of read buffer (Default 1048576) external_table -- use external table for load; NOT_USED, GENERATE_ONLY, EXECUTE (Default NOT_USED) columnarrayrows -- number of rows for direct path column array (Default 5000) streamsize -- size of direct path stream buffer in bytes (Default 256000) multithreading -- use multithreading in direct path resumable -- enable or disable resumable for current session (Default FALSE) resumable_name -- text string to help identify resumable statement resumable_timeout -- wait time (in seconds) for RESUMABLE (Default 7200) date_cache -- size (in entries) of date conversion cache (Default 1000) no_index_errors -- abort load on any index errors (Default FALSE) see readsize
March 2, 2010 - 10am Central time zone
Reviewer:
Badri from India
SQL*Loader-510: Physical record in data file test.txt is longer than the maximum(3000000) SQL*Loader-2026: the load was aborted because SQL Loader cannot continue. Thanks Badri Followup March 2, 2010 - 12pm Central time zone: see above
Inserting lengthy strings March 3, 2010 - 4am Central time zone
Reviewer:
Badri from India
Hi Tom, thanks a lot for your reply. I found in the Oracle docs that the readsize default value is 1MB and that it has a maximum limit of up to 20MB. Based on that information, in my earlier post I was trying to convey that I had tried setting readsize=3000000/4000000 and bindsize=3000000/4000000 on the SQL*Loader command line, and the error was still persisting, as below:

SQL*Loader-510: Physical record in data file test.txt is longer than the maximum(3000000/4000000)
SQL*Loader-2026: the load was aborted because SQL Loader cannot continue.

Is this the correct way to go, or do we need to change anything else? Could you please help. Thanks in advance, Badri

Followup March 3, 2010 - 7am Central time zone:
you see the correlation between setting this value and the error message? You need to increase it - if you want to load it.
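In case it helps the next reader, a hedged example of the command line being pointed at (the userid and file names are placeholders): the physical record here is slightly longer than 3,000,000 bytes - the key, the comma and the record terminator count too - so readsize (and bindsize) must be set comfortably above the longest record rather than exactly equal to it, and the control file still needs the y CHAR(3000000) field definition shown earlier.

$ sqlldr userid=scott/tiger control=test.ctl log=test.log readsize=4194304 bindsize=4194304 rows=1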
Exactly what Oracle should have as a standard function April 21, 2010 - 6am Central time zone
Reviewer:
H James from UK
This works brilliantly for our users, as we can output a summary of their words without having to look at the note screens in full.
Long to Varchar2 conversion May 20, 2010 - 3pm Central time zone
Reviewer:
VL Doran from Chelmsford, MA USA
The convert function (long to varchar) works well for my case. Very much appreciated! You rule!
use of long in oracle sys tables October 26, 2010 - 1am Central time zone
Reviewer:
Rajesh from kolkata, India
In many of the posts you have advocated the use of clob in place of long. Fair enough. But why is it that in many of the sys tables Oracle has the column data types as long - like the base tables for the views ALL_TAB_COLUMNS, DBA_TAB_COLS, DBA_TRIGGERS, DBA_VIEWS etc.? Thanks in advance, Rajesh

Followup October 26, 2010 - 7pm Central time zone:
because they are legacy tables. because changing the dictionary would be *extremely* painful to many thousands of 3rd party applications (and our own).
for new things going forward, use lobs.
October 27, 2010 - 12pm Central time zone
Reviewer:
Alexander
Tell me about it, SQLServer does that all the time for their major releases, it's asinine.
Name of the book January 9, 2011 - 7pm Central time zone
Reviewer:
Ruben from Monterrey, Mexico
What is the name of the book where you published the log_help package?

Followup January 10, 2011 - 8am Central time zone:
Chapter 12, Expert Oracle Database Architecture (see homepage). The code is available for download from the Apress website.