Rocket99 Tutorials

DBA Tasks

The minimum size of the master device and master database depends on the page size defined.

The minimum master device sizes are:

• 2K page size – 24MB
• 4K page size – 45MB
• 8K page size – 89MB
• 16K page size – 177MB

The minimum master database sizes are:

• 2K page size – 13MB
• 4K page size – 26MB
• 8K page size – 52MB
• 16K page size – 104MB
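As a sketch, building a new server with a larger page size means sizing the master device to match; this uses the dataserver flags shown later in this document (-z for the page size, -b for the master device size), with illustrative paths and server name:

/apps/sybase/bin/dataserver -z 4k -b 45M \
-d /apps/sybase/devices/master.dat \
-e /apps/sybase/install/errorlog_sybase1 \
-s sybase1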


Post-installation check

Do not leave master as the default device; otherwise, databases created without a device specification will be placed on master.

1> sp_diskdefault master, defaultoff

2> go

(return status = 0)

1> sp_diskdefault device26, defaulton

2> go

Routine memory check

dbcc traceon(3604)

go

dbcc memusage

go

dbcc traceoff(3604)

go

/* sample post-install config, for 12.5 ASE */

-- send results, no wait

sp_configure 'tcp no delay',1

go

-- allocate 1.2 gb to sybase

sp_configure 'max memory',600000

go

-- allocate at sybase boot time

sp_configure 'lock shared memory',1

go

-- additional data cache

sp_cacheconfig 'default data cache','600M'

go

-- additional procedure cache

sp_cacheconfig 'procedure cache','50M'

go

-- cache for tempdb

sp_cacheconfig 'cache01','80M'

go

/* reboot ASE */

-- Additional config, for a server w/several CPUs

sp_configure "number of user connections",500

go

sp_configure "number of worker processes",100

go

sp_configure "max parallel degree",3

go

sp_configure "max scan parallel degree",3

go

sp_configure "global cache partition number",2

go

sp_configure "number of locks",50000

go

sp_configure "number of open objects",50000

go

sp_configure "number of open databases",32

go

sp_configure "number of devices",50

go

/* reboot ASE */

-- Additional config, for a system using text/blob data

sp_configure 'additional network memory',4096

go

sp_configure 'max network packet size',2048

go

sp_configure 'default network packet size',1024

go

sp_configure 'heap memory per user',4096

go

/*

UNIX Sybase >= 11.9, allow device buffering in O/S;

- improves performance

- increases chance of device corruption during failure

*/

sp_deviceattr "device21","dsync","false"

go

/* LINUX:  may need to set shared memory */

/* example: allow 128 MB of shared memory */

echo 134217728 > /proc/sys/kernel/shmmax

/* or raise it further, if ASE needs more */

echo 999999999 > /proc/sys/kernel/shmmax

Extend tempdb: size should be about 20% of the main production database's size.

/* configure tempdb to 20 mb … this command adds an additional

18 meg to the 2 mb already present on the master device */

1> alter database tempdb on device26 = 18

2> go

/* Add local server name */

sp_addserver  snoopy, local

go

Starting the Sybase process

Data server:

nohup /apps/sybase/install/startserver \

-f /apps/sybase/install/RUN_sybase1 >> startup.log &

Backup server:

nohup /apps/sybase/install/startserver \

-f /apps/sybase/install/RUN_SYB_BACKUP >> startup.log &


Device initialization

/* create a 2 gig device */

1> disk init name = 'device19',

2>      physname  = '/dev/md/rdsk/d19',

3>      vdevno    = 6,

4>      size      = 1024000

5> go


Database creation

/* create a 1 gig database, with a 50 mb transaction log */

/* the for load clause allows quick creation when a dump is available */

1> create database

2>    dbname

3>    on device18 = 1000

4>    log on device8 = 50

5>    for load

6> go

CREATE DATABASE: allocating 512000 pages on disk 'device18'

CREATE DATABASE: allocating 25600 pages on disk 'device8'

/* change the database owner */

use dbname

go

1> sp_changedbowner 'jsmith'

2> go

/* set up automatic log truncate, for development mode */

use master

go

sp_dboption 'dbname','trunc log on chkpt',true

go


A backup routine

use master

go

sp_dboption dbname, "single user", true

go

use dbname

go

checkpoint

go

dbcc checkdb (dbname,skip_ncindex)

go

dbcc checkcatalog

go

dbcc checkalloc

go

use master

go

sp_dboption dbname, "single user", false

go

use dbname

go

checkpoint

go

dump tran dbname to device1

go

dump database dbname to device1

go


Striping Dump Devices

Sybase (prior to version 12) has a 2 GB dump file size limitation on most platforms. Getting around this is easy: simply stripe the dumps across multiple files or devices. The examples below use file names instead of device names.

dump database hr_db to '/usr2/dumps/remote/db_hr05121318.dmp'

stripe on '/usr2/dumps/remote/db_hr_S1_05121318.dmp'

stripe on '/usr2/dumps/remote/db_hr_S2_05121318.dmp'

go

load database hr_db from '/usr2/dumps/remote/db_hr05121318.dmp'

stripe on '/usr2/dumps/remote/db_hr_S1_05121318.dmp'

stripe on '/usr2/dumps/remote/db_hr_S2_05121318.dmp'

go

online database hr_db

go


Moving the transaction log to another device

1> alter database dbname log on device19 = 10

1> sp_logdevice dbname, device19

The last-chance threshold for database dbname is now 1232 pages.

… sql inserts, to fill old log segment …

1> dump tran dbname with truncate_only

1> sp_helplog dbname

2> go

In database 'dbname', the log starts on device 'device19'.

(return status = 0)

1>


Adding a segment to a database

1> use dbname

2> go

1> sp_addsegment 'idx_seg1','dbname','device18'

2> go

DBCC execution completed. If DBCC printed error messages, contact a user with

System Administrator (SA) role.

Segment created.

1> use dbname

2> go

1> sp_dropsegment 'system','dbname','device18'

2> go

DBCC execution completed. If DBCC printed error messages, contact a user with

System Administrator (SA) role.

Segment reference to device dropped.

(return status = 0)

1> sp_dropsegment 'default','dbname','device18'

2> go

DBCC execution completed. If DBCC printed error messages, contact a user with

System Administrator (SA) role.

Segment reference to device dropped.

(return status = 0)


Setting the thresholds

Threshold settings allow customized procedures to be run when database segments approach a defined capacity. The last-chance threshold is set by default to execute sp_thresholdaction, within the current database, when a segment reaches 95% of capacity. The procedure sp_thresholdaction must be created by the DBA. Here is a sample:

create proc sp_thresholdaction (

@dbname varchar(30),

@segmentname varchar(30),

@space_left int,

@status int )  as

declare @msg    varchar(80),

@date1  datetime,

@fname  varchar(80),

@fdate  varchar(20),

@fpath  varchar(40)

select @fpath = '/usr/dumps/logs/'

select @date1 = getdate()

select @fdate =

convert(varchar(2),datepart(MM,@date1)) +

convert(varchar(2),datepart(DD,@date1)) +

convert(varchar(2),datepart(HH,@date1)) +

convert(varchar(2),datepart(MI,@date1))

select @fname = @fpath + 'log_' + @dbname + @fdate + '.dmp'

select @msg = '***!! Last Chance Threshold reached, for ' + @dbname + '(' + @segmentname + ')'

print @msg

if @segmentname = 'logsegment'

dump tran @dbname to @fname

return

Other threshold levels can be created for specific segments. They can be set up to print informational messages to the error log, as a forewarning to the DBA. Here's a sample which reflects the command syntax:

1> sp_addthreshold dbname,logsegment,400,'proc_log_threshold'

2> go

Adding threshold for segment 'logsegment' at '400' pages.

DBCC execution completed. If DBCC printed error messages, contact a user with

System Administrator (SA) role.

(return status = 0)


Configuring the cache

Important: for ASE 12.5, the default data cache MUST be configured!

use master

go

sp_cacheconfig 'dev_cache1','4M'

go

Entry in config file looks like this:

[Named Cache:dev_cache1]

cache size = 4M

cache status = mixed cache

Next, database objects need to be bound to the cache:

use dev_main_db

go

sp_bindcache 'dev_cache1','dev_main_db','customer'

go

sp_helpcache

go

/* see sample post-install config above for more examples */


Security Tasks

/* create a super user, along with database ownership */

use silvermaster

go

sp_addlogin 'silveruser','silver','silvermaster'

go

sp_role 'grant','sa_role','silveruser'

go

sp_changedbowner silveruser

go

/* create a developer profile */

sp_addlogin 'jsmith','yankees','silvermaster'

go

use silvermaster

go

sp_addalias 'jsmith','dbo'

go

/* change jsmith password, note how SA/SSO pwd is required here */

sp_password 'sa_pwd','dodgers','jsmith'

go


Running SQL within a script

This script accepts a Sybase command as a parameter and executes it.

#!/usr/bin/ksh

#------------------------------------------------------

#  File: sybexec

#  Process Sybase command, output goes to std output

#  Parameter: SQL command, in quotes

#

#  Sample call:  sybexec "sp_helpdb billing_db"

#------------------------------------------------------

intfile=/apps/sybase/interfaces

eval /apps/sybase/bin/isql -Sserver -I$intfile -Ujsmith -Pyankees << finis

$1

go

finis


Apply a transaction dump

This script accepts a transaction dump file and dbname as parameters, and applies the dump.

#!/usr/bin/ksh

#-------------------------------------

#  Sybase database loader

#  Parms:  database, log dump file

#-------------------------------------

if test $# -lt 2

then

echo " "

echo "usage:"

echo "------"

echo "syb_applylog dbname dumpfile"

echo " "

echo " "

exit

fi

if test ! -f $2 ; then

echo " "

echo "Invalid dump file: "

echo $2

echo " "

exit

fi

echo "-----------------------------------------------"

echo "`date`"

echo "**** Loading transaction dump file ..." $2

eval /apps/sybase/bin/isql \

-SFocal1 -I/apps/sybase/interfaces -Ujsmith -Pyankees << finis

load transaction $1 from '$2'

go

finis

echo "-----------------------------------------------"

echo "`date`"

echo '**** Load complete.'


Apply multiple transaction dumps

This script accepts a directory and dbname as parameters, and applies the dumps in the directory, in filename order.

#!/usr/bin/ksh

#-------------------------------------------------------

#  Log File Applier

#  Parms:  database name, dump directory containing logs

#-------------------------------------------------------

if test $# -lt 2 ; then

echo " "

echo "usage: "

echo "syb_applylogs dbname sourcedir"

echo " "

exit

fi

if test -d $2 ; then

mstatus="OK"

else

echo " "

echo " Invalid path: "

echo $2

echo " "

exit

fi

for fname in $2/log*.dmp ; do

echo $fname

if test -f $fname ;  then

/usr2/dumps/scripts/syb_applylog $1 $fname

fi

done


Database maintenance procedure

This stored proc performs transaction dumps or database dumps for a specified database. It is used in the script below.

use master

go

create proc sp_syb_maint (@dbname varchar(30),

@fpath  varchar(50),

@mode   varchar(15)) as

declare @fname1  varchar(50),

@fname2  varchar(50),

@fdate   varchar(12),

@fdate1  varchar(12),

@fdate2  varchar(12),

@date1   datetime,

@msg     varchar(80),

@char1   char(1),

@dbprefix char(3)

if (@mode = 'dbcc')

return

select @fpath = rtrim(@fpath)

select @char1 = right(@fpath,1)

if (@char1 != char(47))

select @fpath = @fpath + char(47)

select @date1 = getdate()

select @fdate1 = convert(varchar(12),@date1,112),

@fdate2 = convert(varchar(12),@date1,108)

select @fdate =

substring(@fdate1,5,4) +

substring(@fdate2,1,2) +

substring(@fdate2,4,2)

select @dbprefix = substring(@dbname,1,3)

select @fname1 = @fpath + 'log_' + @dbprefix + @fdate + '.dmp'

select @fname2 = @fpath + 'db_'  + @dbprefix + @fdate + '.dmp'

if ((@mode = 'dump') or (@mode = 'tran_only')) and

charindex(@dbname,'master-model-tempdb-sybsystemprocs')=0

begin

select @msg = '*** Dumping transaction log to ' + @fname1

print  @msg

dump tran @dbname to @fname1

end

if (@mode = 'dump') and

charindex(@dbname,'model-tempdb-sybsystemprocs')=0

begin

select @msg = '*** Dumping database to ' + @fname2

print  @msg

dump database @dbname to @fname2

end

return


Database maintenance script

This script performs DBCCs, transaction dumps, or database dumps for a specified database.

#!/usr/bin/ksh

#-------------------------------------

#  syb_maint

#

#  Sybase database maintenance: perform DBCCs / log backups / db backups

#

#  Parms:  database, dump dir, mode (dump | tran_only | dbcc)

#

#  Step 1:  DBCCs    (dbcc mode only)

#  Step 2:  Backup

#

#  Output is routed to backup.log & dbcc.log

#-------------------------------------

if test $# -lt 3

then

echo " "

echo "usage:"

echo "------"

echo "syb_maint dbname dumpdir mode"

echo " "

echo " "

exit

fi

if test ! -d $2 ; then

echo " "

echo " Invalid path: "

echo $2

echo " "

exit

fi

if test ! -f /usr2/dumps/scripts/contact.txt ; then

echo " contact.txt file not found "

exit

fi

contact=`cat /usr2/dumps/scripts/contact.txt`

logfile1=/usr2/dumps/cronlogs/syb_maint/dbcc.log

logfile2=/usr2/dumps/cronlogs/syb_maint/backup.log

if test -f /tmp/syb_stop ; then

echo " ***** db stop detected ***** " >> $logfile1

exit

fi

echo "=============================" > /dev/null

echo $1                              > /dev/null

echo "=============================" > /dev/null

if test "$3" = "dbcc" ; then

echo "Running dbcc step ..." > /dev/null

eval /apps/sybase/bin/isql -Sserver -I/apps/sybase/interfaces \

-Ujsmith -Pyankees  << finis >> $logfile1

print '***** DBCC $1 **************************************'

go

use master

go

sp_dboption $1, "single user", true

go

use $1

go

dbcc checkdb ($1,skip_ncindex)

go

dbcc checkcatalog

go

dbcc checkalloc

go

checkpoint

go

use master

go

sp_dboption $1, "single user", false

go

quit

finis

# check output

if egrep "error|corrupt" $logfile1 | egrep -v "printed|TABLE|Checking" > /dev/null

then

echo "*** Errors found in DBCC log file."

rmail $contact@mycompany.com << endmsg

*** Errors found in DBCC log file

.

endmsg

fi

fi

echo "Running dump step ..." > /dev/null

eval /apps/sybase/bin/isql -Sserver -I/apps/sybase/interfaces \

-Ujsmith -Pyankees  << finis2 >> $logfile2

print '***** DUMP $1 **************************************'

go

use master

go

exec sp_syb_maint $1, '$2', '$3'

go

quit

finis2

if egrep "error|corrupt" $logfile2 > /dev/null

then

echo "*** Errors found in backup log file"

rmail $contact@mycompany.com << endmsg2

*** Errors found in backup log file

.

endmsg2

fi

echo "Sybase maintenance complete" > /dev/null


BCP data to/from a flat file

/* export */

/apps/sybase/bin/bcp dbname..tablename out /data/data01.bcp \

-c -Ujsmith -Pyankees -Sserver -I/apps/sybase/interfaces

/* import */

/apps/sybase/bin/bcp dbname..tablename in /data/data01.bcp \

-c -Ujsmith -Pyankees -Sserver -I/apps/sybase/interfaces

/* BCP table "employee" to file named test1.txt */

/apps/sybase/bin/bcp dev_db..employee out test1.txt -c -t \\t -r \\n \

-Sserver -Ujsmith -I/apps/sybase/interfaces

/* BCP file named test2.txt into table employee */

/apps/sybase/bin/bcp dev_db..employee in test2.txt -c -t \\t -r \\n \

-Sserver -Ujsmith -I/apps/sybase/interfaces

Parms for each command:

database

table

in/out

character format specified (-c)

tab is the field separator (-t \\t)

newline is the record separator (-r \\n)

server

user

interfaces


Server configuration

One of the best enhancements included in System 11 is the addition of the editable configuration text file. This allows you to change the server's configuration using any text editor, and makes switching configuration files a snap. Notable configuration parameters:

Total memory - memory allocated to SQL Server, in 2K pages. This includes all memory used by the server process: data cache, procedure cache, program memory, and connection memory.

Procedure cache - percent of cache allocated for stored procedures. Decrease this value if stored procedures are not used frequently by your application. The default is 30.

User connections - user connections take about 60K each. Set this parameter sparingly, as it takes more memory than most other config values.

Sort order - set this parameter as soon as possible. The default sort order ID is 50, which is case-sensitive; ID 52 is the case-insensitive equivalent.
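A hedged sketch of the corresponding sp_configure calls (System 11-era parameter names; the values are illustrative, and changing the sort order also requires converting existing data and rebooting):

sp_configure "total memory", 100000   -- in 2K pages (~200MB)
go
sp_configure "procedure cache percent", 20
go
sp_configure "number of user connections", 150   -- ~60K each
go
sp_configure "default sortorder id", 52   -- case-insensitive
go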


Dealing with a Corrupted Database

Hardware failures can result in databases that are corrupt and will not open upon restart of the server. In some cases the database is marked suspect and cannot be opened. The best way to deal with a database in this state is to nuke it and reload it from a backup. Here's a code snippet which will force the drop to occur when drop database fails.

/* note:  X=the dbid of the database (from sysdatabases) */

use master

go

sp_configure "allow updates",1

go

begin tran

go

update sysdatabases set status = 320 where dbid = X

go

/* always make sure the status has been changed to 320 */

select dbid, status from sysdatabases where dbid = X

go

commit tran

go

sp_configure 'allow updates', 0

go

checkpoint

go

/* recycle the server */

dbcc dbrepair (database_name, dropdb)

go

/* now, recycle the server and rebuild the database */

Dealing with a Server Failure

There are rare instances when the server crashes so hard that it cannot be started again. In the synopsis that follows, the crash was due to extremely high database activity after the transaction log filled up, making it impossible to clear. The server was brought down and could not be restarted. The trick here was to bring up the server in "non-recovery" mode, and then clear the transaction log using some tricks from the Sybase support team.

/* Note:  dbname = the database name, X = the dbid */

/* In the runserver file, add the following flags: */

-m

-T3608   (recover master and nothing else)

-or-

-T3607   (no recovery)

/* Now, recycle the server */

Then, in isql:

sp_configure 'allow updates',1

go

update sysdatabases set status=-32768

where name = 'dbname'

go

select config_admin(1,102,1,0,null,null)

go

update sysdatabases set status=0 where dbid=X

go

/* recycle again, things should be OK */

DBCC Notes

DBCCs should be run on a regular basis to check for allocation errors, which occur due to hardware issues (in most cases). For 24×7 needs, DBCCs can be run on a separate server that is loaded from a current database dump. 

Here is a script which will perform the basic DBCC functions

use master

go

sp_dboption invoice_db,'single user', true

go

use invoice_db

go

checkpoint

go

use invoice_db

go

select db_name()

go

checkpoint

go

dbcc checkdb

go

dbcc checkalloc

go

dbcc checkcatalog

go

use master

go

sp_dboption invoice_db,'single user',false

go

use invoice_db

go

checkpoint

go

Table or index allocation errors can be fixed by simply dropping the object and recreating it (using BCP as needed). See below for other repair methods.

Here is a script which will fix many table allocation errors

use invoice_db

go

dbcc tablealloc(tablename, full, fix)

go

Here is a script which will fix most page allocation errors

use master

go

sp_dboption invoice_db,'single user', true

go

use invoice_db

go

checkpoint

go

use invoice_db

go

select db_name()

go

checkpoint

go

dbcc checkalloc(invoice_db,fix)

go

use master

go

sp_dboption invoice_db,'single user',false

go

use invoice_db

go

checkpoint

go


Interface Files: IP and Port Translation

SUN installations have interface file entries that appear cryptic; see below for a dissection of a typical entry.

Interfaces file fragment:

\x0002 08fc a825d0b5 0000000000000000

Breakdown appears below:

168.37.208.181,2300

08fc   2300 (port)

a8   168

25   37

d0   208

b5   181

Explained (this second example entry uses port 1E6C and IP address 9D0E7D24):

0002

Denotes that this entry is a TLI "address family". This is always at

the start of a TLI address. TCP/IP is family 2. Depending on the

network vendor and the byte order of the machine, this works out as a

hexadecimal "0002" (most common) or "0200" (the format is

dependent on whether the machine is "little endian" or "big endian").

Take a look at how your current interfaces file is structured to

confirm your address family number format, and make a change to

the variable ADDRESS_FAMILY in tli_mapper accordingly.

1E6C

This is the hexadecimal equivalent of the port number. In this

example, the hexadecimal address 1E6C translates to the decimal

address 7788.

9D0E7D24

This 8-digit hexadecimal address is the translation of the decimal IP

address equivalent. The address is formed by translating each decimal

portion of the IP address, separated by the period, to its hexadecimal

equivalent (minus the periods). Single digits are entered with a leading

zero.

9D  157

0E  14

7D  125

24  36
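A small ksh sketch (a hypothetical helper, not part of the original page) that builds the hex entry from a port and an IP, following the big-endian "0002" family format above:

#!/usr/bin/ksh
# tli_entry: print a TLI interfaces-file address for a given port and IP
# usage: tli_entry 7788 157.14.125.36
hexport=$(printf "%04x" $1)
hexip=$(echo $2 | awk -F. '{printf "%02x%02x%02x%02x", $1, $2, $3, $4}')
printf '\\x0002 %s %s 0000000000000000\n' $hexport $hexip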


Setting Process Priorities

With Sybase 11.9.5 and above, you can set the run class for processes to LOW, MEDIUM, or HIGH. 

Here's a sample call which sets the priority for a specific spid:

sp_setpsexe 14, 'priority', 'LOW'

Here are sample calls which define a class, and bind a login to it persistently:

sp_addexeclass 'rpt_class',LOW,null,'ANYENGINE'

sp_bindexeclass 'bjenner','lg',null,'rpt_class'

Sybase DBA, from Sybase Wiki

Sybase Interview Questions Part-1

1) What are the system roles and status by default?

sa_role, sso_role, and oper_role are the system roles. They are on by default.

The data cache configurations below can improve performance:

• Configure named data caches to be large enough to hold critical tables and indexes. This keeps other server activity from contending for cache space and speeds queries using these tables, since the needed pages are always found in cache. You can configure these caches to use the relaxed LRU replacement policy, reducing cache overhead.
• To increase concurrency, bind a hot table to one cache and the indexes on the table to other caches.

 

• Create a named data cache large enough to hold the hot pages of a table where a high percentage of the queries reference only a portion of the table.

For example, if a table contains data for a year, but 75% of the queries reference data from the most recent month (about 8% of the table), configuring a cache of about 10% of the table size provides room to keep the most frequently used pages in cache and leaves some space for the less frequently used pages.

 

• Assign tables or databases used in decision-support systems (DSS) to specific caches with large I/O configured.
This keeps DSS applications from contending for cache space with OLTP applications. DSS applications typically access large numbers of sequential pages, and OLTP applications typically access relatively few random pages.

 

• Bind tempdb to its own cache to keep it from contending with other user processes. Proper sizing of the tempdb cache can keep most tempdb activity in memory for many applications. If this cache is large enough, tempdb
activity can avoid performing I/O.

 

• Bind text pages to named caches to improve the performance on text access.
• Bind a database's log to a cache, again reducing contention for cache space and access to the cache.

Just as the procedure cache stores the query plans of stored procedures, the statement cache saves the SQL text and query plan previously generated for ad hoc SQL statements, enabling ASE to avoid recompiling incoming SQL that matches a previously cached statement.

 

The statement cache is effectively part of the procedure cache: when enabled, it reserves a portion of the procedure cache.
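A hedged sketch of turning it on (the server-wide size is given in 2K pages; the set command toggles use of the cache per session):

sp_configure 'statement cache size', 10000
go
set statement_cache on
go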

ASE does not have a table that stores the query plans of stored procedures. Instead, query plans are stored in the procedure cache, which is part of max memory.

ASE maintains the cache with an MRU/LRU (most recently used/least recently used) algorithm. Stored procedures are generally preferred over separate SQL statements because when a user executes a stored procedure, Adaptive Server searches the procedure cache for an existing query plan; if one is available, execution begins immediately.

If a query plan is not available, or all copies are in use (when multiple users execute the same stored procedure at the same time, multiple copies of the query plan are kept in the procedure cache, as cache space allows), the query tree for the procedure is read from the sysprocedures table. The query tree is then optimized, based on the parameters passed to the procedure, converted into a query plan, and execution begins.

For Adaptive Server, devices provide a logical map of a database to physical storage, while segments provide a logical map of database objects to devices.

Adaptive Server keeps track of the various pieces of each database in master.dbo.sysusages. Each entry in sysusages describes one fragment of a database. Fragments are a contiguous group of logical pages, all on the same
device, that permit storage for the same group of segments. Fragments are also known as “disk pieces.”
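These disk pieces can be inspected directly; each row returned below is one fragment (a simple sketch, with 'dbname' illustrative):

select dbid, segmap, lstart, size, vstart
from master..sysusages
where dbid = db_id('dbname')
go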

Creation of Indexes:

You can create an index in either of two ways:

  • With Create index command
  • By specifying Integrity Constraints like Primary Key and Unique Key in create table command.

Integrity constraints (primary and unique keys) have the following restrictions for their indexes:

  • You cannot create nonunique indexes.
  • You cannot set the various options provided by the create index command, such as ignore_dup_key and ignore_dup_row.
  • You cannot drop these indexes without using the alter table command.

When you specify a primary key in the create table command, it creates a unique clustered index; a unique key creates a unique nonclustered index on the key columns (see the example after the summary below).

In summary:

  • Primary key in create table command ==> creates a unique clustered index
  • Unique key in create table command  ==> creates a unique nonclustered index
  • If neither the clustered nor the nonclustered keyword is used ==> ASE creates a nonclustered index.
  • If the unique keyword is not used in the create index command ==> ASE creates a nonunique index.
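A minimal sketch of both constraint-created indexes (hypothetical table), verified with sp_helpindex:

create table orders (
order_id int not null primary key,      -- creates a unique clustered index
cust_code varchar(10) not null unique   -- creates a unique nonclustered index
)
go
sp_helpindex orders
go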

 

Create Index Command Syntax:

create [unique] [clustered | nonclustered] index index_name
on [[database.]owner.]table_name
(column_expression [asc | desc]
[, column_expression [asc | desc]]…)
[with {fillfactor = pct,
max_rows_per_page = num_rows,
reservepagegap = num_pages,
consumers = x, ignore_dup_key, sorted_data,
[ignore_dup_row | allow_dup_row],
statistics using num_steps values}]
[on segment_name]
[index_partition_clause]

Before executing create index, turn on select into: sp_dboption dbname, 'select into', true

The simplest form of create index is: create index index_name on table_name (column_name)

 

Viewing Indexes:

  • Using sp_helpindex we can view the indexes of a table. E.g. sp_helpindex 'tablename'
  • sp_statistics also returns a list of indexes on a table. E.g. sp_statistics 'tablename'
  • In addition, if you follow the table name with "1", sp_spaceused reports the amount of space used by a table and its indexes. E.g. sp_spaceused 'tablename',1

Dropping indexes

  • The drop index command removes an index from the database.
  • Only the owner of an index can drop it. drop index permission cannot be transferred to other users. The drop index command cannot be used on any of the system tables in the master database or in a user database.
  • You cannot use the drop index command to drop indexes that were created by an integrity constraint; to drop those indexes, use the alter table command.

Index Options (for the create index command):

i) ignore_dup_key:

  • This option is only for unique clustered and nonclustered indexes.
  • If you try to insert a duplicate value into a column that has a unique index, the command is canceled. You can avoid this situation by including the ignore_dup_key option with a unique index. The command then succeeds, but the duplicate key value is ignored (that is, the duplicate row is simply not inserted).
  • You cannot create a unique index on a column that already includes duplicate values, whether or not ignore_dup_key is set.
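A brief sketch (hypothetical employee table): the duplicate insert below is discarded with a warning, rather than canceling the batch:

create unique index idx_emp_ssn on employee (ssn)
with ignore_dup_key
go
insert employee (ssn, name) values ('111-22-3333', 'Duplicate Row')   -- ignored if this ssn already exists
go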

ii) ignore_dup_row and allow_dup_row:

  • These options are only for nonunique clustered indexes.
  • These options are not relevant when creating a nonclustered index. Since an Adaptive Server nonclustered index attaches a unique row identification number internally, duplicate rows are never an issue, even for identical data values.
  • A nonunique clustered index allows duplicate keys, but does not allow duplicate rows unless you specify allow_dup_row. If allow_dup_row is set, you can create a new nonunique clustered index on a table that includes duplicate rows, and you can insert or update duplicate rows.
  • The ignore_dup_row option eliminates duplicates from a batch of data. When you enter a duplicate row, Adaptive Server ignores that row and cancels that particular insert or update with an informational error message.
  • If a table has duplicate rows and you create a nonunique clustered index with ignore_dup_row, it will delete the duplicate rows from the table.

iii) sorted_data:

  • The sorted_data option of create index speeds index creation when the data in the table is already in sorted order. sorted_data speeds indexing only for clustered indexes or unique nonclustered indexes.
  • Creating a nonunique nonclustered index is, however, successful, unless there are rows with duplicate keys. If there are rows with duplicate keys, an error message appears and the command is aborted.

Source: Sybooks

 


 

When you insert data into an allpages-locked heap table, the data row is always added to the last page of the table. If there is no clustered index on a table, and the table is not partitioned, the sysindexes.root entry for the heap table stores a pointer to the last page of the heap to locate the page
where the data needs to be inserted.

If the last page is full, a new page is allocated in the current extent and linked onto the chain. If the extent is full, Adaptive Server looks for empty pages on other extents being used by the table. If no pages are available, a new extent is allocated to the table.

Conflicts during heap inserts
—————————-
If many users are trying to insert into an allpages-locked heap table at the same time, each insert must
wait for the preceding transaction to complete.

This problem of last-page conflicts on heaps is true for:
• Single row inserts using insert
• Multiple row inserts using select into or insert…select, or several insert statements in a batch
• Bulk copy into the table

Some workarounds for last-page conflicts on heaps include:
---------------------------------------------------------
• Switching to datapages or datarows locking
• Creating a clustered index that directs the inserts to different pages
• Partitioning the table, which creates multiple insert points for the table, giving you multiple "last pages" in an allpages-locked table (see the sketch after this list)
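For example, the partitioning workaround is a one-line alter table on pre-15 servers (hypothetical table name; the lock-scheme workaround is shown for comparison):

alter table order_detail partition 10   -- 10 insert points, i.e. 10 "last pages"
go
alter table order_detail lock datarows  -- or switch the locking scheme instead
go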


Question: Should I create a unique index on a column when the uniqueness of the column is known?

Suggestion: It is better to create a unique index, because it lets Sybase apply some extra optimizations when the index is used.


Misconception: the Sybase query optimizer cannot consider an index on a temporary table if the index is created and used in the same batch or procedure.

Fact: when an index is created on a temporary table within a proc, Sybase performs a runtime recompilation of the proc in order to make use of the index (if appropriate).

Run the proc with set showplan on and with recompile, and you can see multiple copies of the proc's plan when the proc creates an index on a temp table after populating it.
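A minimal sketch (hypothetical procedure and table names):

create proc p_temp_index_demo as
select id, amount into #work from sales_detail where amount > 0
create index idx_work on #work (id)   -- triggers a runtime recompile so the index can be considered
select id, sum(amount) from #work group by id
go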


Hi,

I would like to touch on a few topics in Sybase that are major subject areas in Oracle. The topics below need thorough discussion.

1. Exception handling in Sybase --> In Sybase I have come across only one command, raiserror, for throwing an error; but what about throwing and catching exceptions, like we have in Oracle?

2. User functions in Sybase --> Can we create custom functions like getdate(), db_name(), etc.?

3. Query plans of functions --> Does a query plan get generated every time we call a function with different arguments?

Please share your thoughts and knowledge on these topics.

Enjoy learning


2) What are the daily activities as a Sybase DBA?

Check the status of the server (using ps -eaf | grep servername, or showserver at the OS level, or try to log in).
If this fails, check the errorlog for any suspicious errors; if there are errors in the errorlog, investigate.
If there are no errors in the errorlog, start the server.
Check the size of the file systems (df -k).
Check the status of the databases (sp_helpdb).
Check the scheduled cron jobs.
Check whether any process is blocked (sp_who and sp_lock).

backups / load database

checking the errorlog

3) What are the default databases in ASE-12_5?

master, model, tempdb, sybsystemprocs, sybsystemdb
Optional databases:
pubs2, pubs3, sybsecurity, audit, dbccdb

What is a bind cache?

When you install Adaptive Server, it has a single default data cache with a 2K memory pool, one cache partition, and a single spinlock.

To improve performance you can add data caches and bind databases or database objects to them.

Use sp_bindcache to bind databases or database objects to a cache.

 

What is defncopy, and what is it used for?

Copies definitions for specified views, rules, defaults, triggers, procedures, or reports from a database to an operating system file or from an operating system file to a database.
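Sample usage (a sketch; login, server, and object names are illustrative):

/apps/sybase/bin/defncopy -Ujsmith -Pyankees -Sserver \
out /tmp/defs.sql dbname dbo.customer_view

/apps/sybase/bin/defncopy -Ujsmith -Pyankees -Sserver \
in /tmp/defs.sql dbname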

What are the mandatory options required for the bcp utility?

ServerName, UserName, Password, the table, the direction (in/out), and the data file.

bcp dbname..tablename in InputFile -S ServerName -U UserName -P Password -f FormatFile

The format file is optional; it is created when bcp is run interactively for the first time.

usage: dataserver [options]

valid options are:

-a caps_file - path to CAPs directive file

-b [size_spec] - master device size specifier

-c config_file - config file for server

-D [size_spec] - default database size specifier

-d master_dev - master device name

-e [error_file] - error log file name default: 'errorlog'

-f f[orcebuild] - force initialization of a device or database

-G logserv_name - event log server name

-g - turn off event logging

-H - start HA server

-h - print this help message then exit

-i interface_dir - interface file directory

-K keytab_file - keytab file name

-k s_principal - server principal name

-L [config_file] - connectivity configuration file; default: 'config'

-M shmem_dir - shared-memory-repository directory

-m - 'master recover' mode restricted to a single user

-p sa_name - login name of SA user

-q - recover quiesced user databases as for 'load database'

-r mirror_file - master device's mirror device name

-s server_name - server name

-T trace_flag - set command-line trace flag

-u sa/sso_name - login name of SA/SSO user

-v - print version message then exit

-w database_name - database name to rewrite

-y keypassword - password to decrypt server's private key

-Z [size_spec] - initial master database size specifier

-z page_size - server page size specifier

-X - start this server as sybmon

usage: bcp [[database_name.]owner.]table_name[:slice_number] {in | out} datafile

[-m maxerrors] [-f formatfile] [-e errfile]

[-F firstrow] [-L lastrow] [-b batchsize]

[-n] [-c] [-t field_terminator] [-r row_terminator]

[-U username] [-P password] [-I interfaces_file] [-S server]

[-a display_charset] [-q datafile_charset] [-z language] [-v]

[-A packet size] [-J client character set]

[-T text or image size] [-E] [-g id_start_value] [-N] [-X]

[-M LabelName LabelValue] [-labeled]

[-K keytab_file] [-R remote_server_principal] [-C]

[-V [security_options]] [-Z security_mechanism] [-Q] [-Y]

[-x trusted.txt_file]

 

 

ASE 15.7 ESD#2 New Features
==============================

• Automatic compressed shared memory dump
• In-Row Large Object Compression
• create database Asynchronously
• Shared Query Plans
• User-Defined Optimization Goal
• Expanded Maximum Database Size
• alter table drop column without datacopy
• Enhancements to dump and load: Dump Configuration, History.
• Hash-Based Update Statistics
• Concurrent dump database and dump transaction Commands
• Fast-Logged Bulk Copy
• Enhancements to show_cached_plan_in_xml
• Merging, Splitting & Moving Partitions
• Non blocking Reorg
• Deferred Table Creation
• Granular Permissions & Predicate Privileges
ASE 15.7 New Features
=============================

• Application Functionality Configuration Group
• ASE Thread-Based Kernel: The ASE kernel is now thread-based instead of process-based
• Data Compression : Use less storage space for the same amount of data, reduce cache memory consumption and improve performance because of lower I/O demands
• New Security Features: End-to-end CIS Kerberos authentication, dual control of encryption keys and unattended startup, secure logins, roles and password management and login profiles
• Abstract Plans in Cached Statements: Abstract plan information can be saved in statement cache
• Shrink Log Space: Allows you to shrink the log space and free storage without re-creating the database using the alter database command to remove unwanted portions of a database log
• Display Currently Set Switches: Allows visibility of all traceflags at the server and session level
• Changes for Large Objects: Includes storing in-row LOB columns for small text, image and unitext datatypes, storing declared SQL statements containing LOBs, indirectly referencing a LOB in T-SQL statements, and allows checking for null values of large objects
• Showing Cached Plans in XML: Allows showplan output in XML for a statement in cache
• Padding a Character Field Using str: Fields can be padded with a specified character or numeric
• Changes to select for update: Allows the select for update command to exclusively lock rows for subsequent updates within the same transaction, and for updatable cursors
• Creation of non-materialized, non-NULL columns
• Sharing Inline Defaults: Allows sharing inline defaults between different tables in the same db
• Monitoring data is retained to improve query performance
• Dynamic parameters can be analyzed before running a query to avoid inefficient query plans
• Monitor Lock Timeouts
• Enable and disable truncation of trailing zeros from varbinary and binary null data
• Full Recoverable DDL: Use dump transaction to fully recover the operations that earlier versions of Adaptive Server minimally logged
• Transfer Rows from Source to Target Table Using merge.
• View Statistics and Histograms with sp_showoptstats: Allow you to extract and display, in an XML document, statistics and histograms for various types of data objects from system tables
• Changes to Cursors: Changes to how cursors lock, manage transactions and are declared
• Nested select Statement Enhancements: Expands the abilities of the asterisk (*)
• Some system procedures can run in sessions that use chained transaction mode
• Expanded Variable-Length Rows: Redefines data-only locked (DOL) columns to use a row offset of up to 32767 bytes. Requires a logical page size of 16K to create wide, variable-length DOL rows.
• Like Pattern Matching: Treat square brackets individually in the like pattern-matching algorithm
• Quoted Identifiers: Use quoted identifiers for tables, views, column names, index names and system procedure parameters
• Allow Unicode Noncharacters: Enable permissive unicode configuration parameter, which is a member of enable functionality group, allows you to ignore Unicode noncharacters
• Reduce Query Processing Latency: Enables multiple client connections to reuse or share dynamic SQL lightweight procedures (LWPs)
• The sybdiag Utility: A new Java-based tool that collects comprehensive ASE configuration and environment data for use by Sybase Technical Support
• The optimizer Diagnostic Utility: Adds the sp_opt_querystats system procedure, which allows you to analyze the query plan generated by the optimizer and the factors that influenced its choice of a query plan

ASE 15.5 New Features
==============================

• In-memory databases provide improved performance by operating entirely in-memory and not reading/writing transactions to disk.
• Relaxed-durability for disk-resident databases delivers enhanced performance by relaxing the guarantee that committed transactions are written to disk at commit time.
• “dump database” and “load database” functionality is provided for both in-memory and relaxed-durability databases.
• Faster compression for backups is provided by two new compression options (level 100 and 101).
• Backup Server support is now available for IBM's Tivoli Storage Manager.
• Deferred name resolution allows the creation of stored procedures before the referenced objects are created in the database.
• FIPS 140-2 encryption is now provided for login passwords that are transmitted, stored in memory or stored on disk.
• Incremental Data Transfer allows exporting specific rows, based on either updates since the last transfer or by selected rows for an output file, and does so without blocking ongoing reads and updates.
• The new bigdatetime and bigtime datatypes provide microsecond level precision. 
• You can now create and manage user-created tempdb groups, in addition to the default tempdb group.
• The new monTableTransfer table provides historical transfer information for tables.
• The new system table, spt_TableTransfer, stores results from table transfers.
• The sysdevices table has been modified to list the in-memory storage cache under the "name" and "phyname" columns.
• Auditing options have been added to support in-memory and relaxed-durability databases, incremental data transfer, and deferred name resolution.

 

Checkstorage will detect allocation errors, so it is a reasonable substitute for dbcc checkalloc (checkstorage will report a fair number of issues that checkalloc will not, many of them trivial, and checkalloc may be able to detect a few odd conditions that
checkstorage does not).

What checkstorage won't catch are issues with the index tree (keys out of order, index entries that point to missing rows, rows
that are not indexed). So checkstorage is not a good substitute for checkdb.
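A hedged sketch of a checkstorage run (this assumes the dbccdb database has already been set up for the target database):

dbcc checkstorage(invoice_db)
go
sp_dbcc_summaryreport invoice_db
go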


A few days back I faced a performance issue on one of our prod data servers. I would like to share it here.

A user was running a batch pushing 90000 rows into a database, and the batch had not moved for the last 1.5 hours.

On logging in to the server, I found its response was poor; it was taking more time than usual to execute a simple query. At first glance it looked like the user job was hogging resources, as the job's spid was in syslogshold and had not moved for a long time.

We did some analysis and finally found that the CPU usage for the server was 100% (I used sp_monitor). I concluded that this high CPU usage was slowing down the server's performance.

The next task was finding the query that was taking the most CPU time. As the server was on version 15, I ran the SQL query below against the MDA tables to find the top CPU consumers.

select top 10  s.SPID, s.CpuTime, t.LineNumber, t.SQLText from master..monProcessStatement s, master..monProcessSQLText t where s.SPID = t.SPID order by s.CpuTime DESC

http://sybaseblog.com/sybasewiki/index.php?title=Query_-_Which_currently_executing_queries_are_consuming_the_most_CPU_%3F

We asked the application team to check the reported spids and, if possible, abort the transactions. Select queries were taking the maximum CPU; as the application team requested, we killed them on the data server.

After a few seconds, the data server's CPU started fluctuating between 50 and 100%, and finally it dropped below 50%.

The application's insert batch then moved very quickly, and the issue was resolved.

You can get full details on MDA queries @ http://sybaseblog.com/sybasewiki/index.php?title=Category:MDA_Table_Query

Thanks.

Index:

Indexes are the most important physical design element in improving database performance:

Indexes help to avoid table scans. A few index pages and data pages can satisfy many queries without requiring reads on hundreds of data pages.

Indexes in ASE:

We can divide ASE indexes into two categories: i) physical order of data relative to the index key, and ii) uniqueness of the index column.

Based on the physical order of data, Adaptive Server provides two general types of indexes that can be created at the table or at the partition level:

• Clustered indexes, where the data is physically stored in the order of the keys on the index:

• For all pages-locked tables, rows are stored in key order on pages, and pages are linked in key order.

• For data-only-locked tables, indexes are used to direct the storage of data on rows and pages, but strict key ordering is not maintained.

• Non clustered indexes, where the storage order of data in the table is not related to index keys

Based on index column uniqueness, indexes can be unique and non unique.

Combining these two properties gives the following index types in ASE:

1. Unique Clustered Indexes

2. Non unique Clustered Indexes

3. Unique Non-clustered Indexes

4. Non unique Non-clustered indexes

 

 

             Clustered                    Nonclustered
Unique       Unique clustered index       Unique nonclustered index
Non-unique   Nonunique clustered index    Nonunique nonclustered index

All indexes fall under the above four types.

 

When an index has more than one column, the prefix "composite" is added to the above types.

That is, composite indexes are indexes created on more than one column. Any of the four types above can be composite as well.

With partitioned tables, the above types can be further categorized as local and global indexes. Local indexes are created at the partition level, while a table-level index is called a global index.

A global index has one index tree covering the whole table; local indexes have multiple index trees, each of which covers one partition of the table.

Function-based indexes are a type of nonclustered index which uses one or more expressions as the index key.

Isolation Levels
=============

Data concurrency means that many users can access data at the same time.

Data consistency means that each user sees a consistent view of the data, including visible changes made by the user's own transactions and transactions of other users.

Isolation is a property that defines how and when the changes made by one operation become visible to other concurrent operations. Isolation is one of the ACID properties.

Lower isolation levels increase transaction concurrency at the risk of allowing transactions to observe a fuzzy or incorrect database state. These incorrect states must be managed in the application design.

4 Isolation Levels:
===================

The ANSI/ISO SQL-92 specifications define four isolation levels:

(1) READ UNCOMMITTED.
(2) READ COMMITTED.
(3) REPEATABLE READ.
(4) SERIALIZABLE.

Lower isolation level  --> higher concurrency, lower data consistency, reduced locking overhead.
Higher isolation level --> lower concurrency, high data consistency, possibly more deadlocks in a multi-user environment.

Three preventable phenomena
===========================

P1 (Dirty Read): Transaction T1 modifies a data item. Another transaction T2 then reads that data item before T1 performs a COMMIT or ROLLBACK. If T1 then performs a ROLLBACK, T2 has read a data item that was never committed and so never really existed.

P2 (Non-repeatable or Fuzzy Read): Transaction T1 reads a data item. Another transaction T2 then modifies or
deletes that data item and commits. If T1 then attempts to reread the data item, it receives a modified value or discovers
that the data item has been deleted.

P3 (Phantom): Transaction T1 reads a set of data items satisfying some search condition. Transaction T2
then creates data items that satisfy T1's search condition and commits. If T1 then repeats its read with the
same search condition, it gets a set of data items different from the first read.

-----------------------------------------------------------------------
Isolation Level      Dirty Read     Nonrepeatable Read     Phantom Read
-----------------------------------------------------------------------
Read uncommitted     Possible       Possible               Possible
Read committed       Not possible   Possible               Possible
Repeatable read      Not possible   Not possible           Possible
Serializable         Not possible   Not possible           Not possible
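In ASE the level can be set per session or per query (a sketch; the accounts table is illustrative):

set transaction isolation level 3     -- every select behaves as if holdlock were specified
go
select balance from accounts where acct_id = 42
go
set transaction isolation level 1     -- back to the default, read committed
go
select balance from accounts at isolation read uncommitted   -- per-query level 0
go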

When we execute dbcc sqltext without turning on traceflags 3604 and 3605,
where does the output of sqltext go? To the errorlog?

No; for the errorlog we have traceflag 3605.

Let's explore the RUN server file again:

/opt/sybase/ASE-15_0/bin/dataserver
-d/opt/sybase/devices/master.dat
-e/opt/sybase/ASE-15_0/install/PROD_ASE_DS1.log
-c/opt/sybase/ASE-15_0/PROD_ASE_DS1.cfg
-M/opt/sybase/ASE-15_0
-sPROD_ASE_DS1 > /dev/null

-e: the errorlog file, where all error messages and informational messages reside.

As we know, when we run any binary, its output is displayed on the screen.

What about the output of the $SYBASE/$SYBASE_ASE/bin/dataserver binary?

Generally we redirect it to the null device (/dev/null), as above.
Now,I am redirecting the output to file like below as in /tmp/sybaselog.out file.

/opt/sybase/ASE-15_0/bin/dataserver
-d/opt/sybase/devices/master.dat
-e/opt/sybase/ASE-15_0/install/PROD_ASE_DS1.log
-c/opt/sybase/ASE-15_0/PROD_ASE_DS1.cfg
-M/opt/sybase/ASE-15_0
-sPROD_ASE_DS1 > /tmp/sybaselog.out

Run the dbcc sqltext command: the result is displayed in the dataserver output file, without any traceflag.

This means that when we want output on the user's screen or in the errorlog, we need to enable traceflag 3604 or 3605 respectively;
otherwise the output appears in the dataserver binary's output file, if we redirected it to a file.

[sybase@localhost ~]$ isql -Usa -SPROD_ASE_DS1

Password:

1> select @@spid

2> go

 

 ——

     14

 

(1 row affected)

1> select name from sysdatabases

2> go

 name

 ————————————————————

 master

 model

 sybsecurity

 sybsystemdb

 sybsystemprocs

 tempdb

 

(6 rows affected)

 

1> dbcc sqltext(14)

2> go

DBCC execution completed. If DBCC printed error messages, contact a user with

System Administrator (SA) role.

1> dbcc sqltext(14)

2> go

DBCC execution completed. If DBCC printed error messages, contact a user with

System Administrator (SA) role.

1>

 

[sybase@localhost ~]$ tail -f /tmp/sybaselog.out

00:00:00000:00001:2011/09/28 08:51:15.11 server  ASE's default unicode sort order is 'binary'.

00:00:00000:00001:2011/09/28 08:51:15.11 server  ASE's default sort order is:

00:00:00000:00001:2011/09/28 08:51:15.11 server         'bin_iso_1' (ID = 50)

00:00:00000:00001:2011/09/28 08:51:15.11 server  on top of default character set:

00:00:00000:00001:2011/09/28 08:51:15.11 server         'iso_1' (ID = 1).

00:00:00000:00001:2011/09/28 08:51:15.11 server  Master device size: 500 megabytes, or 256000 virtual pages. (A virtual page is 2048 bytes.)

00:00:00000:00001:2011/09/28 08:51:15.11 kernel  Warning: Cannot set console to nonblocking mode, switching to blocking mode.

SQL Text: SELECT fid=right(space(80)+isnull(convert(varchar(80),fid),'NULL'),3), spid=right(space(80)+isnull(convert(varchar(80),spid),'NULL'),4), status=SUBSTRING(convert(varchar(80),status),1,10), loginame=SUBSTRING(convert(varchar(80),loginame),1,8), origname=SUBSTRING(convert(varchar(80),origname),1,8), hostname=SUBSTRING(convert(varchar(80),hostname),1,21), blk_spid=right(space(80)+isnull(convert(varchar(80),blk_spid),'NULL'),8), dbname=SUBSTRING(convert(varchar(80),dbname),1,6), tempdbname=SUBSTRIN

SQL Text: select @@spid

 

SQL Text: select name from sysdatabases

Please let me know if you have any more thoughts!!

Adaptive Server uses compiled objects to contain vital information about each database and to help you access and manipulate data.

  • A compiled object is any object that requires entries in the sysprocedures table, including:
  1. Check constraints
  2. Defaults
  3. Rules
  4. Stored procedures
  5. Extended stored procedures
  6. Triggers
  7. Views
  8. Functions
  9. Computed columns
  10. Partition conditions

Compiled objects are created from source text, which are SQL statements that describe and define the compiled object.

When a compiled object is created, Adaptive Server:

  • Parses the source text, catching any syntactic errors, to generate a parsed tree.
  • Normalizes the parsed tree to create a normalized tree, which represents the user statements in a binary tree format. This is the compiled object.
  • Stores the compiled object in the sysprocedures table.
  • Stores the source text in the syscomments table.
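Because the source text lands in syscomments, it can be retrieved with sp_helptext; for example, for the maintenance procedure created earlier in this document:

use master
go
sp_helptext sp_syb_maint
go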


4) If the production server went down, what steps will you follow?

First I will inform all the application managers, and they will send an alert message to all the users regarding the downtime.
Then I will look into the errorlog and take the relevant action based on the error messages. If I cannot solve the issue, I will inform my DBA manager and further log a case with Sybase as priority P1 (system down).


5) What will you do if you hear that server performance is down?

First, check the network transfer rate (using ping against the network port); it might be a network problem, in which case contact the network people. Make sure tempdb is large enough for the user workload; as a rule of thumb, tempdb should be about 25% of the total size of the user databases. Make sure update statistics runs and stored procedures are recompiled (sp_recompile) on a regular basis. Check the database fragmentation level and defragment if necessary. Run sp_sysmon and sp_monitor and analyze the output (CPU utilization, etc.).

6) Query performance down?

Based on the query, first run set showplan on to see how the query is being executed, and analyze the output; based on the output, tune the query, creating indexes on the tables used if necessary. Also check whether the optimizer is picking the right plan, and run optdiag to check when update statistics was last run, since optimization of the query depends on the statistics.
Then run sp_recompile,
so that the stored procedures will pick a new plan based on the current statistics (a sketch follows).
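A short sketch of such a session (hypothetical table):

set showplan on
go
select * from orders where cust_code = 'AB12'
go
set showplan off
go
sp_recompile orders   -- dependent procedures build new plans on next execution
go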

7) How do you check the currently running processes?

ps -eaf


8) A database is log-suspended: you need to issue an ASE kill command on the offending connection, then un-suspend the db. How?

select lct_admin("unsuspend",db_id("db_name"))

9) What commands help you find the process running on a given port? (Only su can run the first one.)

/var/tmp/lsof | grep 5300 (su)
netstat -anv | grep 5300 (anyone)


10) How do you synchronize logins from a lower-version server to a higher-version one, e.g. taking the 11.9.2 syslogins structure to a 12.5 server?

Create a table named logins in tempdb on the 12.5 server with the 11.9.2 syslogins structure, bcp the login data into this table, then use master and run: insert into syslogins select *, null, null from tempdb..logins (the two nulls fill the columns added in 12.5).
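
As a minimal sketch of those steps – assuming the 11.9.2 login rows were bcp'd out to a flat file, and noting that direct inserts into syslogins require the 'allow updates' setting (file and server names are hypothetical):

/* load the old rows first, e.g.: bcp tempdb..logins in syslogins.dat -Usa -P -SNEWSRV -c */
sp_configure "allow updates", 1
go
use master
go
insert into syslogins select *, null, null from tempdb..logins
go
sp_configure "allow updates", 0
go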


11) How to delete UNIX files which are more than 3 days old?

You must be in the parent directory of snapshots, then execute the commands below:

find snapshots -type f -mtime +3 -exec rm {} \;
find /backup/logs/ -name "daily_backup*" -mtime +21 -exec rm -f {} \;


12) How do you find the time taken for the rollback of a process?

kill 826 with statusonly


13) What is the difference between truncate_only & no_log?

Truncate_only and no_log are options used to prune the transaction log without making a copy of it.

i) truncate_only: used to truncate the log gracefully. It checkpoints the database before truncating the log, and removes the inactive part of the log without making a backup copy. Use it on databases whose log is not on a separate device from the data. Do not specify a dump device or Backup Server name.

ii) no_log: use no_log when your transaction log is completely full. no_log does not checkpoint the database before dumping the log; it removes the inactive part of the log without making a backup copy, and without recording the operation in the transaction log. Use no_log only when you have completely run out of log space and cannot run the usual dump transaction command; use it as a last resort, only after dump transaction with truncate_only fails.

When to use dump transaction with truncate_only or with no_log:
• When the log is on the same segment as the data: dump transaction with truncate_only to truncate the log.
• When you are not concerned with recovery of recent transactions (for example, in an early development environment): dump transaction with truncate_only to truncate the log.
• When your usual method of dumping the transaction log (either the standard dump transaction command or dump transaction with truncate_only) fails because of insufficient log space: dump transaction with no_log to truncate the log without recording the event.

Note: dump database immediately afterward to copy the entire database, including the log.
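
For example (the database name and dump path are hypothetical):

/* graceful: checkpoints, then removes the inactive part of the log */
dump transaction mydb with truncate_only
go
/* last resort, when the log is completely full */
dump transaction mydb with no_log
go
/* afterwards, take a full database dump */
dump database mydb to '/backups/mydb.dmp'
go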

14) Define Normalization?

It is the process of designing a database schema so as to eliminate redundant columns and data inconsistency.

15) What are the types of normalization?

First normal form
The rules for first normal form are:
i) Every column must be atomic; it cannot be decomposed into two or more subcolumns.
ii) You cannot have multivalued columns or repeating groups.
iii) Each row and column position can have only one value.
Second normal form
For a table to be in second normal form, every non-key field must depend on the entire primary key, not on part of a composite primary key. If a database has only single-field primary keys, it is automatically in second normal form.
Third normal form
For a table to be in third normal form, a non-key field cannot depend on another non-key field.

16) What are the precautions taken to reduce the down time?

Disk mirroring or a warm standby.

17) What are the isolation levels?

Isolation level specifies the kinds of actions that are not permitted while the current transaction executes. The ANSI standard defines four levels of isolation for SQL transactions; level 0 only prevents other transactions from changing data that an uncommitted transaction has already modified. The user controls the isolation level with the set transaction isolation level option or with the at isolation clause of select or readtext. Level 3 is equivalent to doing all queries with holdlock. The default is level 1. Also called the "locking level".

Isolation level are of 4 types. They are

Level 0: allow dirty reads
Level 1: prevents dirty reads
Level 2: prevents dirty reads & non-repeatable reads
Level 3: prevents phantom reads (dirty reads, non-repeatable reads, phantom reads)
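
For example, the level can be set for the session or for a single query (the table name is hypothetical):

set transaction isolation level 3   /* session: all queries behave as with holdlock */
go
select * from orders at isolation read uncommitted   /* level 0 for this query only */
go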


18) What is optdiag?

The optdiag utility displays statistics from the systabstats and systatistics tables. optdiag can also be used to update systatistics information. Only the SA can run optdiag (a command-line tool for reading, writing, and simulating table, index, and column statistics).

Advantages of optdiag

optdiag can display statistics for all the tables in a database, or for a single table.
optdiag output contains additional information useful for understanding query costs, such as index height and the average row length.
optdiag is frequently used for other tuning tasks, so you should have these reports on hand.

Disadvantages of optdiag

It produces a lot of output, so if you need only a single piece of information, such as the number of pages in the table, other methods are faster and have lower system overhead.
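
A typical invocation from the operating system looks like this (database, table, server, and file names are hypothetical):

optdiag statistics mydb..orders -Usa -Ppassword -SMYSERVER -o orders_stats.out

The output file can be edited and loaded back with the -i option instead of -o.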

19) How frequently you defrag the database?

Whenever heavy modifications (inserts, updates, and deletes) have been made to a table, we defragment it.

20) In 12.5 how to configure procedure cache?

sp_cacheconfig

21) What is isolation level, list different isolation levels in Sybase & what is default

To avoid manually overriding locking, we have transaction isolation levels, which are tied to the transaction.

The isolation levels are 0, 1, 2, and 3.

Isolation level 0 – allows reading pages that are currently being modified; it allows dirty reads.

Isolation level 1 – read operations can read only committed pages; no dirty reads are allowed.

Isolation level 2 – allows a single page to be read many times within the same transaction and guarantees that the same value is read each time; it prevents other users from updating data that has been read.

Isolation level 3 – prevents other transactions from updating, deleting, or inserting rows for pages previously read within the transaction.

Isolation level 1 is the default.

 

Sybase Interview Questions Part-2

1) What are the page sizes available in ASE 12.5?

The available page sizes are 2K, 4K, 8K, and 16K (2K is the default).

2) How do you see the performance of the Sybase server?

using sp_sysmon, sp_monitor, sp_who and sp_lock

3) What are the different types of shells?

Bourne Shell, C-Shell, Korn-Shell


4) What is the difference between Bourne shell and K shell?

The Bourne shell is a basic shell bundled with all UNIX systems, whereas the Korn shell is a superset of the Bourne shell with added features, such as aliasing and longer command and file names. It also has a history command that can display up to 200 previous commands.

5) How do you see the CPU utilization on UNIX?

using sar & top

6) How to mount a file system?

with mount


7) How do you get a port number?

netstat -anv | grep 5000
/var/tmp/lsof |grep 5300

8) How do you check long-running transactions?

using syslogshold


9) What is an Index? What are the types of Indexes?

Index is a separate storage segment created for the table.

There are two types of indexes they are clustered index and non-clustered index.

Clustered Index. Vs Non-Clustered Indexes

Typically, a clustered index will be created on the primary key of a table, and non-clustered indexes are used where needed.

Non-clustered indexes

Leaf pages store pointers to the data rows in a b-tree
Lower overhead on inserts vs. clustered
Best for single-key queries
The last page of the index can become a 'hot spot'
Up to 249 nonclustered indexes per table

Clustered index

Records in the table are sorted physically by key values
Only one clustered index per table
Higher overhead on inserts, if a re-org of the table is required
Best for queries requesting a range of records
The index must exist on the same segment as the table

Note: With a "lock datapages" or "lock datarows" scheme, clustered indexes are sorted physically only upon creation. After that, the indexes behave like non-clustered indexes.
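
For example (hypothetical table):

/* one clustered index per table; rows are physically sorted by order_id */
create clustered index orders_ci on orders (order_id)
go
/* one of up to 249 nonclustered indexes per table */
create nonclustered index orders_nci on orders (cust_id, order_date)
go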


10) What is your challenging task?

Master database recovery


11) What are the dbcc commands?

The database consistency checker (dbcc) provides commands for checking the logical and physical consistency of a database.

Two major functions of dbcc are:

i) Checking page linkage and data pointers at both page level and row level using checkstorage or checktable and checkdb.

ii) Checking page allocation using checkstorage, checkalloc, checkverify, tablealloc and indexalloc, dbcc checkstorage, dbcc checktable, dbcc checkalloc,

dbcc indexalloc, dbcc checkdb.


12) How to find on Object Name from a Page Number?

dbcc page(dbid,pageno)
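
For example, with a hypothetical dbid of 5 and page number 1234 (as with other dbcc commands, traceon(3604) routes the output to the client session):

dbcc traceon(3604)
go
dbcc page(5, 1234)
go
/* the objid in the output can be translated, inside that database,
   with select object_name(objid) */
dbcc traceoff(3604)
go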


13) What is table partitioning?

It is splitting large tables into smaller partitions, with: alter table table_name partition n

14) What is housekeeping task?

When ASE is idle, it runs the housekeeper task, which raises checkpoints that automatically flush dirty pages from buffer to disk.


15) What are the steps you take if your server process gets slow down?

It is an open-ended question; as far as I am concerned:

i) first I will check the network speed (ping -t)
ii) then I check the errorlog
iii) I check the indexes
iv) I check the transaction log
v) I check tempdb
vi) I check when update statistics was last run; if it is stale, I update the statistics, followed by sp_recompile.


16) How do you check the Sybase server running from UNIX box?

ps -ef | grep "server name", and showserver


17) What are the db_options?

trunc log on chkpt, abort tran on log full, select into/bulkcopy/pllsort, single user, dbo use only, no chkpt on recovery


18) How do you recover the master database?

First I check that clean dumps of the important system tables exist: sysdevices, sysdatabases, sysusages, sysalternates, syslogins, sysloginroles. Then I:
• build a new master device using buildmaster
• shut down the server
• restart the server in single-user mode (-m in the runserver file)
• load the dumps of the five important system tables
• check the system tables that were loaded
• restart in normal mode.


19) How do you put master database in single-user mode?

using the -m flag


20) How do you set the sa password?

Add -psa to the dataserver command line in the runserver file; at startup ASE generates and displays a new password for sa.

 

Sybase Interview Questions Part-3

1) What is hotspot?

Multiple transactions inserting into a single table concurrently, all contending for its last page.

2) How do you check the current run level in UNIX?

who -r

 

3) What is defncopy?

It is a utility used to copy the definitions of database objects, either from a database to an operating system file or from an operating system file to a database. Invoke the defncopy program directly from the operating system. defncopy provides a non-interactive way of copying out definitions (create statements) for views, rules, defaults, triggers, or procedures.
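
For example, copying definitions out to a file and back in (user, server, file, and object names are hypothetical):

defncopy -Usa -Ppassword -SMYSERVER out /tmp/defs.sql mydb my_proc my_view
defncopy -Usa -Ppassword -SMYSERVER in /tmp/defs.sql mydb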


4) What is bcp?

It is a utility to copy the data from a table to flat file and vice versa

5) What are the modes of bcp?

bcp in works in one of two modes:
Slow bcp – logs each row insert that it makes; used for tables that have one or more indexes or triggers.
Fast bcp – logs only page allocations, copying data into tables without indexes or triggers at the fastest speed possible.

To determine the bcp mode that is best for your copying task, consider the

· Size of the table into which you are copying data
· Amount of data that you are copying in
· Number of indexes on the table
· Amount of spare database device space that you have for re-creating indexes

Fast bcp might enhance performance; however, slow bcp gives you greater data recoverability.
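
For example (all names are hypothetical; -c uses character format and -b sets the batch size):

bcp mydb..orders out /tmp/orders.dat -Usa -Ppassword -SMYSERVER -c
bcp mydb..orders in /tmp/orders.dat -Usa -Ppassword -SMYSERVER -c -b 1000

For fast bcp, the db option must be enabled first: sp_dboption mydb, "select into/bulkcopy/pllsort", true.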

6) What are the types in bcp?

bcp in & bcp out


7) What is defrag?

Defragmentation is dropping and re-creating the indexes, so that the gaps in space are filled.

8) What is the prerequisite for bcp?

We need to set the db option "select into/bulkcopy/pllsort" to true.


9) What is slow bcp?

In this mode there are indexes (or triggers) on the table, and each row insert is logged.

10) What is fast bcp?

In this mode there are no indexes or triggers on the table; only page allocations are logged.

11) Will triggers fires during bcp?

No, trigger won’t fire during bcp.

12) What is primary key, foreign key and unique key?

Unique key: enforces uniqueness of the column but allows a single NULL value; by default it is associated with a nonclustered index.
Primary key: the column or columns whose values uniquely identify a row in a table. It does not allow NULL values; by default it is associated with a clustered index.
Foreign key: a key column in a table that logically depends on a primary key column in another table; a column (or combination of columns) whose values are required to match a primary key in some other table.

13) What is candidate key, alternate key & composite key?

Candidate key: A primary key or unique constraint column. A table can have multiple candidate keys.
Alternate key: a candidate key that was not chosen as the primary key.
Composite key: An index key that includes two or more columns; for example authors(au_lname,au_fname)

14) What’s the different between a primary key and unique key?

Both primary key and unique constraints enforce uniqueness of the column(s) on which they are defined. But by default, a primary key creates a clustered index on the column, whereas unique creates a nonclustered index. Another major difference is that a primary key doesn't allow NULLs, but a unique key allows one NULL only.
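
A small illustration (hypothetical table):

create table customers (
    cust_id int not null primary key,   /* no NULLs; clustered index by default */
    email   varchar(60) null unique     /* one NULL allowed; nonclustered index by default */
)
go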

15) How do you trace H/W signals?

With the trap command.
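
For example, in a Bourne/Korn shell script:

# clean up the work file if the script is interrupted or terminated
trap 'rm -f /tmp/work.$$; exit 1' INT TERM
# running trap with no arguments lists the traps currently set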


16) What is a natural key?

A natural key is a key for a given table that uniquely identifies the row.


17) What are the salient features of 12.5?

i) different logical page sizes (2, 4, 8, 16K)
ii) a data migration utility
iii) the default database sybsystemdb was added
iv) compression of dump files by Backup Server
v) wider columns
vi) larger numbers of rows
vii) in version 12 we had buildmaster; from 12.5 the dataserver binary takes over that role


18) What are different statistic commands you use in UNIX?

iostat, netstat, vmstat, mpstat, psrstat


19) What do you mean by query optimization?

It is the process of providing appropriate indexes and up-to-date statistics so that the query optimizer can prepare the best query plan for a table; with this, performance increases.

20) What are locks?

lock: A concurrency control mechanism that protects the integrity of data and transaction results in a multi-user environment. Adaptive Server applies page or table locks to prevent two users from attempting to change the same data at the same time, and to prevent processes that are selecting data from reading data that is in the process of being changed.

 

Sybase Interview Questions Part-4

1) What are levels of lock?

Page level, table level, and row level.

2) What is deadlock ?

A deadlock occurs when two or more user processes each hold a lock on a separate page or table and each wants to acquire a lock on the other process's page or table. The transaction with the least accumulated CPU time is killed and all of its work is rolled back.


3) What is housekeeper?

The housekeeper is a task that becomes active when no other tasks are active. It writes dirty pages to disk, reclaims lost space, flushes statistics to systabstats, and checks license usage.

4) What are work tables? What is the limit?

Work tables are created automatically in tempdb by Adaptive Server for merge joins, sorts, and other internal processes. The limit is 14: the system will create a maximum of 14 work tables for a query.

5) What is update statistics?

Updates information about distribution of key values in specified indexes or for specified columns, for all columns in an index or for all columns in a table.

Usage: ASE keeps statistics about the distribution of the key values in each index, and uses these statistics in its decisions about which indexes to use in query processing.

Syntax: update statistics table_name [[index_name]| [(column_list)]]
[ using step values]
[ with consumers = consumers ]

update index statistics table_name [index_name]
[ using step values]
[ with consumers = consumers ]

6) What is sp_recompile?

Causes each stored procedure and trigger that uses the named table to be recompiled the next time it runs.

Usage: The queries used by stored procedure and triggers are optimized only once, when they are compiled. As you add indexes or make other changes to your database that affect its statistics, your compiled stored procedures and triggers may lose efficiency. By recompiling the stored procedures and triggers that act on a table, you can optimize the queries for maximum efficiency.
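
A typical nightly sequence looks like this (table name hypothetical):

update statistics orders
go
update index statistics orders
go
sp_recompile orders
go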

7) What is a difference between a segment and a device?

A device is, well, a device: storage media that holds images of logical pages. A device will have a row in the sysdevices table.
A fragment is a part of a device, indicating a range of virtual page numbers that have been assigned to hold the images of a range of logical page numbers belonging to one particular database. A fragment is represented by a row in sysusages.
A segment is a label that can be attached to fragments. Objects can be associated with a particular segment (technically, each indid in sysindexes can be associated with a different segment). When future space is needed for the object, it will only be allocated from the free space on fragments that are labeled with that segment.
There can be up to 32 segments in a database, and each fragment can be associated with any, all, or none of them (warnings are raised if there are no segments associated). Sysusages has a column called segmap, which is a bitmap of the associated segments; it maps to the syssegments table.

8) Do we have to create an sp_thresholdaction procedure on every segment, every database, or somewhere else?

You don't *have* to create threshold action procedures for any segment, but you *can* define thresholds on any segment. The log segment has a default "last chance" threshold set up that will call a procedure called "sp_thresholdaction". It is a good idea to define sp_thresholdaction, but you don't have to – if you don't, you will just get a "proc not found" error when the log fills up and will have to take care of it manually.
Thresholds are created only on segments, not on devices or databases. You can create the procedure in sybsystemprocs with a name starting with "sp_" to have multiple databases share the same procedure, but often each database has its own requirements, so the procedures are created locally instead.
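
A minimal sp_thresholdaction along the lines of the sample in the Sybase documentation (the dump file path is hypothetical):

create procedure sp_thresholdaction
    @dbname      varchar(30),
    @segmentname varchar(30),
    @space_left  int,
    @status      int
as
begin
    /* dump the log when the threshold on the log segment fires */
    dump transaction @dbname to "/backups/logdump.trn"
    print "LOG DUMP: threshold fired on segment '%1!' of database '%2!'",
        @segmentname, @dbname
end
go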

9) When to run a reorg command?

reorg is useful when:

• A large number of forwarded rows causes extra I/O during read operations.
• Inserts and serializable reads are slow because they encounter pages with noncontiguous free space that needs to be reclaimed.
• Large I/O operations are slow because of low cluster ratios for data and index pages.
• sp_chgattribute was used to change a space management setting (reservepagegap, fillfactor, or exp_row_size) and the change is to be applied to all existing rows and pages in a table, not just to future updates.
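
For example, on a data-only-locked table (name hypothetical):

reorg forwarded_rows orders
go
reorg reclaim_space orders
go
reorg rebuild orders   /* full offline rebuild of the table and its indexes */
go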

10) What is bit datatype and what’s the information that can be stored inside a bit column?

bit datatype is used to store Boolean information like 1 or 0 (true or false). Until SQL Server 6.5 bit datatype could hold either a 1 or 0 and there was no support for NULL. But from SQL Server 7.0 onwards, bit datatype can represent a third state, which is NULL.


11) What are different types of triggers?

A trigger is a stored procedure that fires automatically when an event occurs on a table, such as an insert, delete, or update. There are three types of triggers available in Sybase: insert, update, and delete triggers.

12) How many triggers will be fired if more than one row is inserted?

A trigger fires once per statement, not once per row: however many rows a single insert affects, the insert trigger fires only once, and the inserted table holds all of the affected rows.

13) What are advantage of using triggers?

To maintain the referential integrity.

14) How do you optimize a stored procedure?

By creating appropriate indexes on the tables, and by writing the query so that the optimizer can pick up the appropriate index.


15) How do you optimize a select statement?

Use SARGs in the where clause and check the query plan using set showplan on. If the query is not using the proper index, force the correct index to make the query run faster.


16)How do you force a transaction to fail?

By killing a process you can force a transaction to fail.


17) What are constraints? Explain different types of constraints?

Constraints enable the RDBMS to enforce the integrity of the database automatically, without needing you to create triggers, rules, or defaults.

Types of constraints: NOT NULL, CHECK, UNIQUE, PRIMARY KEY, FOREIGN KEY

18) What are the steps you will take to improve performance of a poor performing query?

This is a very open-ended question and there could be a lot of reasons behind the poor performance of a query. Some general issues you could talk about: no indexes, table scans, missing or out-of-date statistics, blocking, excess recompilations of stored procedures, procedures and triggers without SET NOCOUNT ON, poorly written queries with unnecessarily complicated joins, too much normalization, and excess usage of cursors and temporary tables.

Some of the tools/ways that help you troubleshoot performance problems: SET SHOWPLAN ON

19) What you do when a segment gets full?

Wrong: a segment can never get full (even though some error messages state something to that extent). A segment is a "label" for one or more database device fragments; the fragments to which that label has been mapped can get full, but the segments themselves cannot. (Well, OK, this is a bit of a trick question… when those device fragments fill up, you either add more space or clean up old/redundant data.)

20) Is it a good idea to use data rows locking for all tables by default?

Not by default. Only if you're having concurrency (locking) problems on a table, and you're not locking many rows of the table in a single transaction, should you consider datarows locking for that table. In all other cases, use either datapages or allpages locking.
(Datapages locking is a reasonable default lock scheme for all tables because switching to datarows locking is then fast and easy, whereas for allpages locking the entire table has to be converted, which may take long for large tables. Also, datapages locking has other advantages over allpages, such as not locking index pages, update statistics running at level 0, and the availability of the reorg command.)

 

Sybase Interview Questions Part-5

1) Is there any advantage in using 64-bit version of ASE instead of the 32-bit version?

The only difference is that the 64-bit version of ASE can handle a larger data cache than the 32-bit version, so you'd optimize on physical I/O.
Therefore, this may be an advantage if the amount of data cache is currently a bottleneck. There's no point in using 64-bit ASE with the same amount of "total memory" as the 32-bit version, because 64-bit ASE comes with additional overhead in memory usage – the net amount of data cache would actually be less for 64-bit than for 32-bit in this case.

2) What is difference between managing permissions through users and groups or through user-defined roles?

The main difference is that user-defined roles (introduced in ASE 11.5) are server-wide and are granted to logins. Users and groups (the classic method that has always been there since the first version of Sybase) are limited to a single database. Permissions can be granted/revoked to both user-defined roles and users/groups. Whichever method you choose, don't mix them, as the precedence rules are complicated.

3) How do you BCP only a certain set of rows out of a large table?

If you’re in ASE 11.5 or later, create a view for those rows and BCP out from the view. In earlier ASE versions, you’ll have to select those rows into a separate table first and BCP out from that table. In both cases, the speed of copying the data depends on whether there is a suitable index for retrieving the rows.
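
A sketch of the view approach (names hypothetical):

/* in the database */
create view recent_orders as
    select * from orders where order_date >= 'Jan 1 2012'
go
/* then, from the operating system:
   bcp mydb..recent_orders out /tmp/recent.dat -Usa -Ppassword -SMYSERVER -c */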

4) What are the main advantages and disadvantages of using identity columns?

The main advantage of an identity column is that it can generate unique, sequential numbers very efficiently, requiring only a minimal amount of I/O. The disadvantage is that the generated values themselves are not transactional, and that the identity values may jump enormously when the server is shutdown the rough way (resulting in “identity gaps”). You should therefore only use identity columns in applications if you’ve addressed these issues (go here for more information about identity gaps).

5) Is there any disadvantage of splitting up your application data into a number of different databases?

When there are relations between tables/objects across the different databases, then there is a disadvantage indeed: if you restore a dump of one of the databases, those relations may not be consistent anymore. This means that you should always back up a consistent set of related databases together, as the unit of backup/restore. Therefore, when making this kind of design decision, backup/restore issues should be considered (and the DBA should be consulted).

6) How do you tell the date and time the server started?

select "Server Start Time" = crdate from master..sysdatabases where name = "tempdb", or select * from master..sysengines

7) How do you move tempdb off of the master device?

This is the Sybase TS method of removing most activity from the master device:

Alter tempdb onto another device:

1> alter database tempdb on …
2> go

Drop the segments:

1> sp_dropsegment "default", tempdb, master
2> go
1> sp_dropsegment "logsegment", tempdb, master
2> go
1> sp_dropsegment "system", tempdb, master
2> go

8) What are the 4 isolation levels, and which is the default?

· Level 0 – read uncommitted/ dirty reads
· Level 1 – read committed – default.
· Level 2 – repeatable read
· Level 3 – serializable

9) Describe differences between chained mode and unchained mode?

· Chained mode is ANSI-89 compliant, whereas unchained mode is not.
· In chained mode the server executes an implicit begin tran, whereas in unchained mode an explicit begin tran is required.
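
For example (table hypothetical):

set chained on
go
update orders set status = 'shipped' where order_id = 42
/* the update implicitly opened a transaction; end it explicitly */
commit
go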


10) What is dump transaction with standby_access used for?

It dumps the transaction log up to a point at which there are no active transactions, so that the dump can be loaded into a warm-standby database that is then brought online with online database for standby_access.

 

Sybase Utilities

Utilities:

  • optdiag – displays optimizer statistics or loads updated statistics into system tables.

The advantages of optdiag are:

o optdiag can display statistics for all tables in a database, or for a single table.
o optdiag output contains additional information useful for understanding query costs, such as index height and the average row length.
o optdiag is frequently used for other tuning tasks, so you should have these reports on hand.

  • isql – interactive SQL parser for Adaptive Server.

Syntax: isql -Uuser -Sserver -Ddatabase

When connecting to the server, isql looks in the interfaces file (sql.ini on Windows, interfaces on UNIX) to find the address of the server.

  • ddlgen – used to take a backup of an object's structure (DDL).
  • defncopy – used to take a backup of defaults, views, rules, stored procedures, and triggers.
  • bcp – has two directions, in and out. Out extracts the data from an object to a flat file; in is the reverse. bcp in has two modes: fast bcp (non-logged) and slow bcp (logged). For fast copying, the option must be set to true: sp_dboption db_name, "select into/bulkcopy/pllsort", true.

 

Update Stats & Sp_recompile

The update statistics command helps the optimizer prepare the best plan for a query, based on the density of the index key values recorded in the sysstatistics table.

Execute/schedule update statistics on heavily modified user objects on a daily basis.

SP_RECOMPILE causes each stored procedure and trigger that uses the named table to be recompiled the next time it runs.

Syntax update statistics table_name [index_name].

Syntax sp_recompile objname.

 


 

MDA Tables

MDA tables provide detailed information about server status, the activity of each process in the server, the utilization of resources such as data caches, locks, and the procedure cache, and the resource impact of each query that is run on the server.

Steps that need to be followed when installing MDA tables:

Check sp_configure 'enable cis' and set it to 1.

Add a 'loopback' server name alias in master – sp_addserver loopback, null, @@servername

Install the MDA tables – isql -Usa -P -S -i ~/scripts/installmontables

Assign 'mon_role' to logins allowed MDA access – grant role mon_role to sa

To test the basic configuration – select * from master..monState

Set the monitoring configuration parameters, for example:

• enable monitoring = 1
• sql text pipe active = 1, sql text pipe max messages = 100
• plan text pipe active = 1, plan text pipe max messages = 100
• statement pipe active = 1, statement pipe max messages = 100
• errorlog pipe active = 1, errorlog pipe max messages = 100
• deadlock pipe active = 1, deadlock pipe max messages = 100
• wait event timing = 1, process wait events = 1
• object lockwait timing = 1, sql batch capture = 1
• statement statistics active = 1, per object statistics active = 1
• max sql text monitored = 2048
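
Taken together, the setup amounts to a short isql script; only the first few settings are shown here, and the remaining pipe parameters follow the same pattern:

sp_configure 'enable monitoring', 1
go
sp_configure 'sql text pipe active', 1
go
sp_configure 'sql text pipe max messages', 100
go
sp_configure 'statement pipe active', 1
go
sp_configure 'statement pipe max messages', 100
go
sp_configure 'max sql text monitored', 2048
go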

 


July 30th, 2012 · andrewmeph

I have been involved for the past two months in analyzing migration problems of two large local ASE sites.  I decided to share with you the things discovered during the failed ASE 15 migration analysis so that if you happen to be in a similar situation you may discover the way out with less pains.

For these customers, migration to ASE 15.0/15.5 has been a painful fiasco for two consecutive years. Cases have been opened. Professional Services have been sent on site. A lot of work has been done on rethinking and rewriting code for the new optimizer's whims. Tears, money, and what not shed all through the process.

The truth is, Sybase TS has been telling us for years that we have bad code, and we – as customers or support teams – were each time infuriated by the insolence of being told this. I cannot say that TS has been completely wrong. I can say, however, that had we thought more about WHAT is so peculiar about our code rather than WHY we are told that our code sucks, we might have spared ourselves a lot of pain.

I will not write a detailed report on what we found here in the blog pages – it would require a lot of pyrotechnics to make things legible here. Rather, I attach the report of the study. You may download it and read it at your leisure. I think it is worth the pains. Who knows, maybe it will solve migration problems for more customers out there. Local customers were not SO peculiar after all.

Here is the link:  Migration to ASE 15 – 2 Case Studies Involving Prepared Statements.

For those who have little time to read it, let me just warn: if you use prepared statements in your application code – awares or unawares – beware. You may be paying a very high penalty for this, especially in ASE 15, which has been made to work fast – sometimes very fast. The penalty may be so high that you will consistently fail to migrate your old ASE 12.5.x servers to ASE 15 without knowing that the solution is so close.

Here is a preview of some data:

 

Have fun reading this.  I have had a lot of fun digging up the roots of the failed migrations (using my own tools, to be sure, and writing new ones along the way).

If you have any questions – be my guest.

Cheerfully,

A.T.M.

Categories: ASE, Database, Troubleshooting · Tags: ASE 15, Case Study, Migration

July 22nd, 2012 · Anurag Dubey

Query — How to set the optimization Goals?

Answer — Optimization goals allow you to choose an optimization strategy that best fits your query environment:

• allrows_mix – the default goal, and the most useful goal in a mixed-query environment. allrows_mix balances the needs of OLTP and DSS query environments.
• allrows_dss – the most useful goal for operational DSS queries of medium to high complexity. Currently, this goal is provided on an experimental basis.
• allrows_oltp – the optimizer considers only nested-loop joins.

At the server level, use sp_configure. For example:

sp_configure "optimization goal", 0, "allrows_mix"
At the session level, use set plan optgoal. For example:

set plan optgoal allrows_dss

At the query level, use a select or other DML command. For example:
select * from A order by A.a plan "(use optgoal allrows_dss)"

In general, you can set query-level optimization goals using select, update, and delete statements. However, you cannot set query-level optimization goals in pure insert statements, although you can set optimization goals in insert…select statements.

Courtesy: Sybooks

Categories: ASE · Tags: ASE, optimization, Sybase

July 22nd, 2012 · Anurag Dubey

Query — When does the optimization of statements in a stored procedure occur, at compile time or at run time?

Answer — ASE 15.0.2 and later defer the query optimization of a stored procedure's statements until the statements are executed.

 

Query — Why does a stored procedure run slower on its first execution?

Answer — A stored procedure runs slower on first execution because it performs query optimization, creates the query plan, and stores the plan in cache. On subsequent runs it picks up the existing query plan from the cache and runs faster.

 

Query — What is the difference between stored procedure query plans in ASE 12.5.0 and in ASE 15.0.2/later?

Answer — ASE versions before 15.0.2 created the query plan of a stored procedure at compile time, when the values of variables were not available, so the procedure would sometimes not run as expected.

However, in versions 15.0.2 and later, the execution engine step has been segregated into two steps: 1. the procedural execution engine and 2. the query execution engine.

The procedural engine executes command statements such as create table, execute procedure, and declare cursor directly. For data manipulation language (DML) statements, such as select, insert, delete, and update, the engine sets up the execution environment for all query plans and calls the query execution engine.

The query execution engine executes the ordered steps specified in the query plan provided by the code generator, which now has the values of the variables used in the stored procedure. Hence query plans for stored procedures are more accurate than in previous versions.

Courtesy: Sybooks

Categories: ASE · Tags: Abstract plan, ASE, Query plan, Stored procedure, Sybase

July 22nd, 2012 · Anurag Dubey

Query —  What is the difference between Physical and logical reads?

Answer — Physical reads are defined as pages read from the disk.

Logical reads are defined as pages read from the main memory (RAM/cache).
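
Both counters can be observed per query with set statistics io (table name hypothetical):

set statistics io on
go
select count(*) from orders
go
/* the output reports, per table, logical reads (pages found in cache)
   and physical reads (pages fetched from disk) */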

Categories: ASE · Tags: ASE, Logical I/O, Logical read, Physical I/O, Physical read, Sybase

July 22nd, 2012 · Anurag Dubey

Query — What is "index coverage" of a query?

Answer — It is defined as whether the query can be satisfied by retrieving data from the index pages without accessing the data pages. ASE can use indexes that cover the query, even if no where clause is included in the query.

 

Query — Can a query use an index even if no where clause is included in the query?

Answer — Yes; as described above, ASE can use indexes that cover the query, even if no where clause is included in the query.
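
For example (hypothetical table): with an index on (cust_id, order_date), the query below can be answered entirely from the index leaf pages, even though it has no where clause:

create nonclustered index orders_cov on orders (cust_id, order_date)
go
select cust_id, order_date from orders
go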

July 22nd, 2012 · Anurag Dubey

Query –> What are the 6 modules of query processor?

Answer –>

1. Parser

2. Normalization

3. Preprocessor

4. Optimizer

5. Code generator

6. (1) Procedural execution engine

6. (2) Query execution engine

Query   –> At which level in Query Processor Modules we can determine if the statement may benefit from using the statement cache?

Answer  –> Level 2 (Normalization)

 

Query  –> What does normalization involve during query parsing (query processor modules)?

Answer  –> Normalization involves determining column and table names, transforming the query tree into conjunctive normal form (CNF), and resolving datatypes.

Query –> Define “Query Plan” ?

Answer –> Query Plan consists of  retrieval tactics and an ordered set of execution steps, which retrieve the data needed by the query.

Courtesy: sybooks

 

 

 

Sybase-DBA-User-Guide-for-Beginners

Tags


Sybase DBA

Sybase DBA Manual

 

 

Sybase DBA

11/10/2010

 

 

It is a humble attempt from our end to welcome the readers into the intriguing and amazing world of SYBASE. Utmost care has been taken to ensure that it is easily grasped even by novices. This document will hopefully become the first step before you launch yourself into the profession of DATABASE ADMINISTRATION. A few of the topics have been copied from http://www.sybase.com.

 

 

 

Contents

 

What is DBMS/RDBMS?
Duties of DBA
ASE Overview and Architecture Diagram
What is Database?
How is data stored in database?
Disk Initialization
New Page Allocation Procedure in ASE
Segments
Thresholds
Roles & Groups
Logins & User
Interface & Error Log File
ASE Memory Usage
Db_options
Configuration Parameters
Indexes
Update Stats & Sp_recompile
Locks & Isolation Level
Phases When Query Is Executed & Process Status
Hit/Miss Diagram
Start and Shut Down Of Server
Backup/Recovery/Refresh/Restore
Dbcc
MDA Tables
Multiple Temp Databases
Utilities
Troubleshooting
How to Apply EBF
Query and Server Performance Tuning
Calculations
Sybase Diagram
Replication Overview and Architecture Diagram
Crontab

 

 

 

 

 

 

 

 

 

 

 

 

What is DBMS/RDBMS?

A Data Base Management System is a collection of programs that enable users to create and maintain a database. It can also be said that DBMS is a process of managing data for efficient retrieval and storage of data. Hence it is general purpose software that facilitates the processes of defining, constructing, manipulating and sharing databases among various users.

The RDBMS is a database management system that stores data in the form of tables, with relationships existing between the tables. In an RDBMS, data and information are acquired through the relations (tables).

Duties of DBA

The following are some of the duties of a DBA-

  • Checking the server status (automate the job in crontab to monitor the server status).
  • Ensure that backups happen daily.
  • Health check for all the databases.
  • Performance related tasks (automate Update statistics & sp_recompile in crontab).
  • Rebooting the servers during maintenance window.
  • Security Management (adding logins/users with proper approvals from application/technical leads).
  • Proactively monitoring the database data & log growth (threshold setup).
  • Monitoring error logs.

ASE Overview and Architecture Diagram

 

Adaptive Server Enterprise (ASE) has long been noted for its reliability, low total cost of ownership and superior performance. With its latest version, ASE 15, it has been dramatically enhanced to deliver capabilities urgently needed by enterprises today. It lays the long-term foundation for strategic agility and continuing innovation in mission-critical environments. ASE 15 provides unique security options and a host of other new features that boost performance while reducing operational costs and risk. Find out how you can exploit new technologies such as grids and clusters, service-oriented architectures and real-time messaging.

 

ASE 15 meets the increasing demands of large databases and high transaction volumes, while providing a cost effective database management system. Its key features include on-disk encryption, smart partitions and new, patent-pending query processing technology that has demonstrated a significant increase in performance, as well as enhanced support for unstructured data management. ASE is a high-performance, mission-critical database management system that gives Sybase customers an operational advantage by lowering costs and risks.

 

Figure 1: Architecture Diagram

[Diagram: the ASE process (system processes and handlers) inside shared memory. MAX MEMORY holds the ASE executables, procedure cache, user log cache, data cache, stack space, and statement cache; the default 2K data cache pool has an MRU/LRU chain with a wash marker that can range from 20% to 80%, and the 2K pool is used for roll forward/rollback. Around the server: system devices and databases, user devices on file system/raw devices, the configuration file, interfaces file, error log, <servername>.krg, DB options, and segments/thresholds; at boot the master device is initialized and the master DB recovered.]

 

 

 

 

 

 

What is Database?

 

  • Database is collection of data objects (Tables, Views, Stored Procedures, Functions, Triggers and indexes).
  • Number of databases that can be created under one adaptive server depends on the configuration parameter “number of databases”
  • All the information regarding databases created in single adaptive server can be viewed in system table SYSDATABASES and the database space usage in SYSUSAGES.
  • Databases are broadly divided into system and user databases.
    • System databases are default databases created during adaptive server installation (master, model, tempdb & sybsystemprocs); a few system databases are optional and can be created/configured by the DBA, such as sybsecurity, sybsystemdb, pubs, sybsyntax, and dbccdb.
    • User databases can only be created by the system administrator or whoever has system administrator privileges. A max of 256 databases can be created on a single adaptive server.

 

  • Syntax

Create database database_name

[On {default | database_device} [= size] 

[, database_device [= size]]…]

[Log on database_device [= size]

[, database_device [= size]]…]

[With {override | default_location = “pathname”}]

[For {load | proxy_update}]

 

  • Options:

With override: must be specified when data and log segments are placed on a same Database device.

For load: does not initialize the allocated space which saves times when a dump will be loaded next

  • Alter database db_name on data_dev2= ‘100M’
  • Drop database db_name: drops the database which is not currently in use and not contain any constraints referring to other databases.
  • Dbcc dbrepair (db_name, dropdb)
  • Sp_helpdb: displays information about specified database. When used without any db name displays information about all databases.  When db name is current database then displays even segment information.   

 

  • Sp_helpdb db_name, ‘device_name’ displays device fragments in alphabetical order; default order in which device fragments are added

 

  • sp_spaceused: Displays total space used by all the tables in current database

sp_spaceused appl3

 

  • sp_renamedb olddb_name, new_dbname (db must be in single user mode)

 

  • Master DB stores information regarding all other databases, logins, devices, etc. It keeps track of all the databases. It has nearly 32 system tables; some of them are syslogins, sysdatabases, sysdevices, sysroles, sysprocesses, etc. Server-wide details are stored here. It's the heart of the server.
  • Model DB is the template for all the databases, excluding the Master.
  • Temp DB can be referred to as a workspace for users to perform operations. It is volatile, so whenever the server is rebooted it is recreated using the template from Model DB. Three kinds of tables are created in tempdb:
    • Session level (table name prefixed with #) – A session level temporary table exists till the expiry of the session of the user
    • Global level (table name prefixed with tempdb) – A global level temporary tables exists till the server is rebooted
    • Workable tables- System creates this kind of tables like for sorting purpose.

 

  • SYBSYSTEMPROCS DB stores all the system procedures
How is data stored in database?

  • The data in a database is stored in the form of tables.
  • The smallest unit of data storage is the page; 8 contiguous pages form an extent. The page size can be 2KB, 4KB, 8KB, or 16KB.
  • The minimum size of a table is one extent: 8 pages × the page size.
  • Allocation unit: a collection of 256 pages is called an allocation unit. Each allocation unit has a first page called the allocation page (AP), which stores information about all the pages of the unit.
  • The Object Allocation Map (OAM) stores information about the pages of a table; its entries point to the allocation pages of all the allocation units where the object's data is stored.
  • The number of entries per OAM page depends on the logical page size the server is using:

  • 2K logical page size – 250 OAM entries
  • 4K logical page size – 506 OAM entries
  • 8K logical page size – 1018 OAM entries
  • 16K logical page size – 2042 OAM entries

 

  • The Global Allocation Map (SYSGAMS system table) records and tracks the information of all the AUs in the particular database. SYSGAMS is not accessible by any user. For every 8GB of disk space a new GAM page is created.

 

 

Figure 2: Overview of an Allocation Unit

Latch: latches are non-transactional synchronization mechanisms used to guarantee the physical consistency of a page. While rows are being inserted, updated or deleted, only one Adaptive Server process can have access to the page at the same time. Latches are used for datapages and datarows locking, but not for allpages locking.

Note: the most important distinction between a lock and a latch is the duration.

 

Table 1: Difference between Latch and Lock

Latch: held only for the time required to insert or move a few bytes on a data page, to copy pointers, columns or rows, or to acquire a latch on another index page.

Lock: can persist for a long period of time – while a page is being scanned, while a disk read or network write takes place, for the duration of a statement, or for the duration of a transaction.

 

 

 

Disk Initialization

 

  • Disk initialization is the process of allocating disk space to server. During initialization adaptive server will divide the new allocated disk into allocation units and an entry is made in sysdevices system table.
  • All the information regarding devices connected to the Server, can be viewed in system table SYSDEVICES.
  • The number of devices that can be connected to a server was limited to 256 up to version 12.5; from version 15.0 the limit is over 2 million (effectively unlimited).
  • A disk allocated to one server cannot be shared with other servers. Any number of databases can use the disk as long as they are in the same server.
  • Number of devices that can be allocated to the adaptive server depends on the configuration parameter  “number of devices”
  • Disk once initialized to the adaptive server can only be dropped when all the associated databases are dropped
  • The "disk default" option on the master device has to be turned off, so that databases are not created there by default.

Syntax

Disk init

     Name = “device_name” ,

     Physname = “physicalname” ,

     [Vdevno = virtual_device_number,]

     Size = number_of_blocks

     [, vstart = virtual_address

         , cntrltype = controller_number]

     [, contiguous]

     [, dsync = {true | false}]

ASE 15.0

 

disk init name = 'dev2',
physname = '/data/sql_server.dev2',
size = '100M',
directio = true
/* vdevno is assigned automatically in 15.0 */

Default (dsync = true) – disk writes are guaranteed.

 

Note: dsync/directio apply only to file system devices, and directio is faster than dsync; the two cannot be used together.

Maximum devices: pre-15.0 – 256; 15.0 onwards – 2 million.

 

  • sp_helpdevice: shows device details
  • sp_dropdevice device_name: drops the device; it does not delete the file in the file system. Make sure to drop the databases before dropping any device.
  • sp_deviceattr device_name, ‘dsync/directio’, {true, false}

New Page Allocation Procedure in ASE

 

Figure 3: Page Allocation Procedure

[Diagram: a data page with its header, table row offsets, and next/previous page pointers; the GAM pointing to allocation units AU1 and AU2, each with an allocation page (AP), OAM pages, and extents; when an extent fills, the object is extended to another AU.]

 

 

 

 

 

 

 

 

 

 

 

 

 

 

 

 

 

Whenever a user inserts data, the server first checks for available pages in the current extent (via the OAM) and inserts into it. If none is found, a new extent is allocated for the object in the same allocation unit with the help of the allocation page, and this extent is mapped to the OAM of the object. If no extent is available in the same allocation unit, the server checks the GAM for an allocation unit with an available extent; after the extent is allocated to the object in a different allocation unit, it is mapped into the object's OAM from that other allocation unit. If no allocation unit is available in the current GAM, a new GAM is created and the whole new-page process is repeated. If the data exceeds 8GB, a new GAM comes into the picture. In this way the GAM, AU, AP, and OAM all come into play when a new page is requested.

 

 

Segments

 

  • A segment can be described as a logical name given to a single device, a fraction of a device, or multiple devices.
  • Two types of segments- System and User
  • System defined- System, Default & Log
  • System- stores all the data related to system tables in that particular database
  • Log- all data modifications in the database are temporarily stored in log
  • Default- stores the data related to user created data objects
  • User defined- A max of 32 segments can be created in a database including the 3 system segments.
  • SYSSEGMENTS, SYSUSAGES system tables stores detail information regarding the segments, DB size, etc.
  • The data and log segments for a single database should not be placed on the same device; doing so hurts performance, and up-to-the-minute recovery becomes impossible.
  • Before deleting a segment we should ensure that the objects associated with that segment are dropped.
  • When we add additional space to the database, the system and default segments automatically extend onto the new device, whereas user-created segments have to be extended manually.
  • There are 3 ways in which we can move tables form one segment to another.
    • Using bcp- Take a backup of the objects and then copy them back to new segment.
    • Using clustered index- By re-creating a clustered index for the object on new segment.
    • Using sp_placeobject – it moves the next upcoming records to the new segment.

 

Syntax

Creates seg_name in the current database and device name

sp_addsegment seg_name, db_name, device_name: db_name matches to current database

 

sp_placeobject segment_name, object_name,db_name: future allocation will be mapped to new segment. Object name can be table name, or index name, (‘tab_name.index_name’)

 

sp_helpsegment seg_name: Displays information about all the segments in current database. If specified display information about only one segment which is specified

 

Sp_dropsegment segment_name, db_name, [device_name]: drops the segment in the current database

 

Sp_extendsegment, segname, db_name, device_name

 

Thresholds

 

  • Thresholds monitor the free space in a database and alert the DBA to take appropriate action before a segment fills up completely, because if this is neglected the server will suspend processes and users cannot access the database.
  • Thresholds can be defined on data and log segments.
  • Two types of thresholds: system & user.
    • System level – the last-chance threshold (LCT). Usually 18% of the log space is reserved for the LCT. The LCT limit cannot be modified; it is set by the adaptive server automatically. We can only modify the stored procedure sp_thresholdaction.
    • User level – free-chance thresholds (FCT). An FCT is defined by the user based on the usage of the database and the log segment size.
    • sp_thresholdaction sends an alert if a transaction crosses the LCT.
    • A max of 256 thresholds can be created for a Database.
    • All the details regarding the thresholds can be found in SYSTHRESHOLD.
    • FCT’s can be dropped or modified.

 

Syntax

sp_addthreshold dbname, segname, free_space, proc_name: adds a threshold.

sp_modifythreshold dbname, segname, free_space [, new_proc_name] [, new_free_space] [, new_segname]: modifies a given threshold.
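
For example, a minimal sketch (database mydb and the user-written alert procedure sp_warn_space are hypothetical) that fires when fewer than 2048 free pages remain on the log segment:

sp_addthreshold mydb, logsegment, 2048, sp_warn_space
go
/* later, raise the trigger point to 4096 free pages */
sp_modifythreshold mydb, logsegment, 2048, sp_warn_space, 4096
go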

 

Roles & Groups

 

  • Roles provide individual accountability for users performing system administration and security-related tasks. Roles are granted to individual server login accounts, and actions performed by these users can be audited and attributed to them.
  • System table: sysroles.
  • Groups provide a convenient way to grant and revoke permissions to more than one user in a single statement.
  • The sp_addgroup system procedure adds a row to sysusers in the current database; in other words, each group in a database, like each user, has an entry in sysusers.
  • By default, every ASE database has the group "public".
  • Below are brief descriptions of the roles:
    • System Administrator (sa_role)
    • System Security Officer (sso_role)
    • Operator (oper_role)
    • Sybase technical support (sybase_ts_role)
    • Replication (replication_role)
    • Distributed transaction manager (dtm_tm_role)
    • High availability (ha_role)
    • Monitoring and diagnosis (mon_role)
    • Job Scheduler administration (js_admin_role)
    • Real-time messaging (messaging_role)
    • Web Services (web_services)
    • Job Scheduler user (js_user_role)

 

Syntax

create role role_name [with passwd "password" [, {passwd expiration | min passwd length | max failed_logins} option_value]]: creates a role.

sp_addgroup grpname: creates a group.
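
A minimal sketch, with hypothetical names (role ops_role, login jsmith, group devgroup):

create role ops_role with passwd "Secret123"
go
grant role ops_role to jsmith
go
sp_addgroup devgroup
go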

 

Logins & User 

 

  • syslogins holds the details that allow people access at the server level.
  • sysusers holds the details that allow people access at the database level.
  • The two tables are related through the suid column.
  • syslogins: suid, name, password, dbname, srvname, procid.
  • sysusers: suid, uid, gid, name.

 

Syntax

sp_addlogin loginame, passwd [, defdb] [, deflanguage] [, fullname] [, passwdexp]: creates a login with a default database.

sp_adduser loginame [, name_in_db [, grpname]]: creates a database user for a login.
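
A minimal sketch, assuming a database mydb and the group devgroup created earlier (names hypothetical):

sp_addlogin 'jsmith', 'Secret123', mydb
go
use mydb
go
sp_adduser 'jsmith', 'jsmith', devgroup
go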

 

Interface & Error Log File

 

  • An interface file contains network information about all servers on your network, including Adaptive Server, Backup Server, and XP Server, plus any other server applications such as Monitor Server, Replication Server, and any other Open Server applications.
  • The network information in the file includes the server name, network name or address of the host machine, and the port, object, or socket number (depending on the network protocol) on which the server listens for queries.
  • The dsedit and dscp utilities are used to create and maintain the interfaces file. Using dsedit or dscp is preferred over a plain text editor, as they are easier to use and ensure that the interfaces file stays consistent in format.

 

  • The error log is stored externally. All server-level events and errors of severity level greater than 16 are recorded in the error log.
  • For each entry we see the error number, the severity level, and the error message in the errorlog.
  • These error messages can be found in the sysmessages table.
  • Errors of severity level less than 17 are not recorded in the error log, as their effect is minimal; object-level error messages are also not included.
  • ASE cannot start without an error log file.

 

ASE Memory Usage

 

  • Memory is consumed by various configuration parameters, statement cache, procedure cache and data caches.
  • The total memory allocated during boot-time is the sum of memory required for all the configuration needs of Adaptive Server. The total memory required value can be obtained from the read-only configuration parameter ‘total logical memory’.
  • The configuration parameter ‘max memory’ must be greater than or equal to ‘total logical memory’.
  • ‘Max Memory’ indicates the amount of memory you will allow for Adaptive Server needs. During boot-time, by default, Adaptive Server allocates memory based on the value of ‘total logical memory’. However, if the configuration parameter ‘allocate max shared memory’ has been set, then the memory allocated will be based on the value of ‘max memory’.
  • Caches in Max Memory / Adaptive Server-
    • Procedure Cache- Adaptive Server maintains an MRU/LRU (most recently used/least recently used) chain of stored procedure query plans. As users execute stored procedures, Adaptive Server looks in the procedure cache for a query plan to use. If a query plan is available, it is placed on the MRU end of the chain, and execution begins. If more than one user uses a procedure or trigger simultaneously, there will be multiple copies of it in cache. If the procedure cache is too small, a user trying to execute stored procedures or queries that fire triggers receives an error message and must resubmit the query. Space becomes available when unused plans age out of the cache. The default procedure cache size is 3271 memory pages.
    • Statement Cache- The statement cache allows Adaptive Server to store the text of ad hoc SQL statements. Adaptive Server compares a newly received ad hoc SQL statement to cached SQL statements and, if a match is found, uses the plan cached from the initial execution. In this way, Adaptive Server does not have to recompile SQL statements for which it already has a plan. This allows the application to amortize the costs of query compilation across several executions of the same statement. The statement cache memory is taken from the procedure cache memory pool.
    • Data Cache- At the point of installation the default data cache is of 2K memory pool. The data cache contains pages from recently accessed objects, typically:
      • sysobjects, sysindexes, and other system tables for each database
      • Active log pages for each database
      • The higher levels and parts of the lower levels of frequently used indexes
      • Recently accessed data pages
      • The key points for memory configuration are:
        • The system administrator should determine the size of shared memory available to Adaptive Server and set ‘max memory’ to this value.
        • The configuration parameter 'allocate max shared memory' can be turned on at boot-time or run-time to allocate all the shared memory up to 'max memory' with the least number of shared memory segments. A large number of shared memory segments can degrade performance on certain platforms; check your operating system documentation to determine the optimal number. Note that once a shared memory segment is allocated, it cannot be released until the next server reboot.
        • Configure the different configuration parameters, if the defaults are not sufficient.
        • The difference between 'max memory' and 'total logical memory' is additional memory available for the procedure cache, the data caches, or other configuration parameters.
        • The amount of memory allocated by Adaptive Server at boot-time is determined by either 'total logical memory' or 'max memory'. If this value is too high:
        • Adaptive Server may not start if the physical resources on the machine are insufficient.
        • If it does start, the operating system page-fault rate may rise significantly and the operating system may need to be reconfigured to compensate.
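
As a hedged illustration of the points above (the value is a placeholder; 1048576 2K pages is roughly 2 GB):

/* read-only: memory required by the current configuration */
sp_configure 'total logical memory'
go
/* upper bound on memory ASE may use, in 2K pages */
sp_configure 'max memory', 1048576
go
/* grab all shared memory up to 'max memory' at boot */
sp_configure 'allocate max shared memory', 1
go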

 

Figure 4 How Adaptive Server uses memory

  • For good performance, the cache hit ratio should be above 90%; it can be checked with the stored procedure sp_sysmon.

 

 

 

Db_options

  • To change the default settings for the database we use database options.
  • Sp_dboption- displays or changes database options.
  • List of database options are-

Table 2 DB Options in the System Databases

SNo  DB Option                      MASTER  MODEL  TEMPDB  SYBSYSTEMPROCS
1    abort tran on log full         No      Yes    Yes     Yes
2    allow nulls by default         No      Yes    Yes     Yes
3    async log service              No      No     No      No
4    auto identity                  No      Yes    Yes     Yes
5    dbo use only                   No      Yes    Yes     Yes
6    ddl in tran                    No      Yes    Yes     Yes
7    delayed commit                 No      Yes    Yes     Yes
8    disable alias access           No      Yes    Yes     Yes
9    identity in nonunique index    No      Yes    Yes     Yes
10   no chkpt on recovery           No      Yes    Yes     Yes
11   no free space acctg            No      Yes    Yes     Yes
12   read only                      No      Yes    Yes     Yes
13   single user                    No      Yes    No      Yes
14   select into/bulkcopy/pllsort   No      Yes    Yes     Yes
15   trunc log on chkpt             No      Yes    Yes     Yes
16   unique auto_identity index     No      Yes    Yes     Yes

  • Syntax: sp_dboption [dbname, optname, optvalue]. After setting an option, run checkpoint in the target database for the change to take effect (see the example below).
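
For example (mydb is a hypothetical database; sp_dboption must be run from master):

use master
go
sp_dboption mydb, 'trunc log on chkpt', true
go
use mydb
go
checkpoint
go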

 

Configuration Parameters

  • Configuration parameters define server-wide settings and are divided into static and dynamic parameters.
  • The configured values of the parameters are stored in the sysconfigures table.
  • The current run values are stored in the syscurconfigs table.
  • The config values are also stored in the <server name>.cfg file; every time you modify a config value, the current <server name>.cfg is saved as <server name>.001 (and so on) and the new values appear in <server name>.cfg.
  • ASE cannot start without a valid config file.
  • Always keep a backup of the config file (see the sketch below).
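
A simple sketch of such a backup, assuming the server is named SYBASE and its config file lives in $SYBASE (names and path hypothetical):

cp $SYBASE/SYBASE.cfg $SYBASE/SYBASE.cfg.$(date +%Y%m%d)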

 

 

Indexes

  • Indexes are created for faster retrieval of data. Indexes are preferred when the requested rows amount to 5% or less of the table's total rows.
  • When a new record is inserted into a table with no index, it is stored on the last available page, called the hot spot. A table without a clustered index is known as a heap table.
  • When there is no index on the table, a user query performs a table scan (scanning every page allocated to the object). Indexes are preferred in order to avoid table scans.
  • Indexes can be broadly divided into two types:
    • Clustered indexes - there can be only one clustered index on a table, stored in binary-tree format. The leaf nodes contain the data itself, and the data is stored in physical key order (asc/desc). There are 3 levels in total (root, intermediate & data/leaf). Whenever we re-create the clustered index, any existing nonclustered indexes are automatically re-created, and update statistics is run on the object automatically (ASE runs it internally).
    • Nonclustered indexes - there can be as many as 249 nonclustered indexes on a table. The leaf nodes contain pointers that map to the actual data, which is why access through a nonclustered index takes more time. The ordering is logical, not physical. There are 4 levels in total (root, intermediate, leaf & data).
    • All the indexes of a database can be found in sysindexes. A table (heap) is identified by indid 0, a clustered index by indid 1, and nonclustered indexes by indid values from 2 to 250.
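
For illustration, a minimal sketch on a hypothetical orders table:

/* one clustered index allowed; data pages are kept in key order */
create clustered index ci_orders on orders(order_id)
go
/* up to 249 nonclustered indexes; leaf pages hold row pointers */
create nonclustered index nc_orders_cust on orders(cust_id)
go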

 

The following diagrams illustrate clustered and nonclustered index pages.

 

 

 

Figure 5 Non clustered index

 

 

 

 

 

Figure 6 Clustered Index


Table 3 Page Split, Overflow Pages, Row Forwarding

Page Split (APL): If there is not enough room on the data page for the new row, a page split must be performed:

  • A new data page is allocated on an extent already in use by the table. If no free page is available, a new extent is allocated.
  • The next and previous page pointers on adjacent pages are changed to incorporate the new page in the page chain. This requires reading those pages into memory and locking them.
  • Approximately half of the rows are moved to the new page, with the new row inserted in order.
  • The higher levels of the clustered index are changed to point to the new page.
  • If the table also has nonclustered indexes, all pointers to the affected data rows must be changed to point to the new page and row locations.

When you create a clustered index for a table that will grow over time, you may want to use fillfactor to leave room on data pages and index pages. This reduces the number of page splits for a time.

Overflow Pages: Special overflow pages are created for nonunique clustered indexes on allpages-locked tables when a newly inserted row has the same key as the last row on a full data page. A new data page is allocated and linked into the page chain, and the newly inserted row is placed on the new page.

The only rows placed on an overflow page are additional rows with the same key value. In a nonunique clustered index with many duplicate key values, there can be numerous overflow pages for the same value.

The clustered index does not contain pointers directly to overflow pages. Instead, the next-page pointers are used to follow the chain of overflow pages until a value is found that does not match the search value.

Row Forwarding (DOL): When a row in a data-only-locked table is updated so that it no longer fits on its page, a process called row forwarding performs the following steps:

  • The row is inserted onto a different page, and
  • A pointer to the row ID on the new page is stored in the original location for the row.

Indexes do not need to be modified when rows are forwarded; all indexes still point to the original row ID.

 

Update Stats & Sp_recompile

  • update statistics helps the optimizer prepare the best plan for a query, based on the density of the index key values stored in the sysstatistics table.
  • Execute or schedule update statistics on heavily modified user objects on a daily basis.
  • sp_recompile causes each stored procedure and trigger that uses the named table to be recompiled the next time it runs.
  • Syntax: update statistics table_name [index_name].
  • Syntax: sp_recompile objname.
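
A minimal usage sketch, on a hypothetical orders table:

update statistics orders
go
/* force dependent procedures and triggers to recompile with the new statistics */
sp_recompile orders
go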
Locks & Isolation Level

  • Adaptive Server protects the tables, data pages, or data rows currently used by active transactions by locking them. Locking is a concurrency control mechanism: it ensures the consistency of data within and across transactions.
  • Locking affects performance when one process holds locks that prevent another process from accessing needed data. The process that is blocked by the lock sleeps until the lock is released. This is called lock contention.
  • A deadlock occurs when two user processes each have a lock on a separate page or table and each wants to acquire a lock on the same page or table held by the other process. The transaction with the least accumulated CPU time is killed and all of its work is rolled back.
  • Adaptive Server supports locking at the table, page, and row level.
    • Allpages locking - locks both data pages and index pages; for whole-table operations, it can acquire a single table-level lock instead.
    • Datapages locking - locks only the data pages; a lock is acquired for each page containing one of the required rows.
    • Datarows locking - locks only the data rows; a lock is acquired on each row.
    • Adaptive Server has two levels of locking:
      • For tables that use allpages locking or datapages locking, either page locks or table locks.
      • For tables that use datarows locking, either row locks or table locks.
      • Page and row locks
        • Shared locks-Adaptive Server applies shared locks for read operations. If a shared lock has been applied to a data page or data row or to an index page, other transactions can also acquire a shared lock, even when the first transaction is active. However, no transaction can acquire an exclusive lock on the page or row until all shared locks on the page or row are released. This means that many transactions can simultaneously read the page or row, but no transaction can change data on the page or row while a shared lock exists.
        • Exclusive locks-Adaptive Server applies an exclusive lock for a data modification operation. When a transaction gets an exclusive lock, other transactions cannot acquire a lock of any kind on the page or row until the exclusive lock is released at the end of its transaction. The other transactions wait or “block” until the exclusive lock is released.
        • Update locks-Adaptive Server applies an update lock during the initial phase of an update, delete, or fetch (for cursors declared for update) operation while the page or row is being read. Update locks help avoid deadlocks and lock contention. If the page or row needs to be changed, the update lock is promoted to an exclusive lock as soon as no other shared locks exist on the page or row.
        • Table locks
          • Intent lock-An intent lock indicates that page-level or row-level locks are currently held on a table. Adaptive Server applies an intent table lock with each shared or exclusive page or row lock, so an intent lock can be either an exclusive lock or a shared lock. Setting an intent lock prevents other transactions from subsequently acquiring conflicting table-level locks on the table that contains that locked page. An intent lock is held as long as page or row locks are in effect for the transaction.
          • Shared lock-This lock is similar to a shared page or row lock, except that it affects the entire table. A create nonclustered index command also acquires a shared table lock.
          • Exclusive lock-This lock is similar to an exclusive page or row lock, except it affects the entire table. For example, Adaptive Server applies an exclusive table lock during create clustered index command. Update and delete statements require exclusive table locks if their search arguments do not reference indexed columns of the object.
          • syslocks contains information about active locks and is built dynamically when queried by a user; no updates to syslocks are allowed.
          • Deadlocks can be tuned with two options:
            • Deadlock checking period specifies the minimum amount of time (in milliseconds) before Adaptive Server initiates a deadlock check for a process that is waiting on a lock to be released.
            • Deadlock retries specifies the number of times a transaction can attempt to acquire a lock when deadlocking occurs during an index page split or shrink.
            • Spinlock ratio - a spinlock is a simple locking mechanism that prevents a process from accessing the system resource currently used by another process. All processes trying to access the resource must wait (or "spin") until the lock is released. If a spinlock ratio of 100 is specified, Adaptive Server allocates one spinlock for each 100 resources. The number of spinlocks allocated depends on the total number of resources as well as on the ratio specified; the lower the ratio, the higher the number of spinlocks.
            • sp_lock reports information about processes that currently hold locks.
            • Lock promotion can only happen from row to table or from page to table level.
            • Config parameters related to locks are :
              • Number of locks
              • Lock scheme
              • Lock wait period
              • Lock spinlock ratio
              • page lock promotion HWM
              • page lock promotion LWM
              • page lock promotion PCT
              • row lock promotion HWM
              • row lock promotion LWM
              • row lock promotion PCT
  • The isolation level controls the degree to which operations and data in one transaction are visible to operations in other, concurrent transactions.
  • ASE has four isolation levels, of which level 1 is the default.
  • Level 0 - also known as read uncommitted, allows a task to read uncommitted changes to data in the database. This is also known as a dirty read, since the task can display results that are later rolled back.
  • Level 1 - also known as read committed, prevents dirty reads. Queries at level 1 can read only committed changes to data. At isolation level 1, if a transaction needs to read a row that has been modified by an incomplete transaction in another session, it waits until the first transaction completes (either commits or rolls back).
  • Level 2 - also known as repeatable read, prevents non-repeatable reads. These occur when one transaction reads a row and a second transaction modifies that row; if the second transaction commits its change, subsequent reads by the first transaction yield results different from the original read.
  • Level 3 - also known as serializable reads, prevents phantoms. These occur when one transaction reads a set of rows that satisfy a search condition and a second transaction then modifies the data (through an insert, delete, or update statement); if the first transaction repeats the read with the same search conditions, it gets a different set of rows.
Phases When Query Is Executed & Process Status

  • Whenever a query is executed, it passes through three phases: parse, compile, and execute. The parser checks for syntax errors; the compiler looks for an existing query plan in the procedure cache (building one if none is found); execution looks for the needed data in the data cache (reading it from the database if it is not found).
  • During these phases, a process can move through various states depending on the availability of I/O, the query plan, data, etc. The possible states of a process are listed below.

 

 

Table 4 Process Status

  • recv sleep - waiting on a network read; kill takes effect immediately.
  • send sleep - waiting on a network send; kill takes effect immediately.
  • alarm sleep - waiting on an alarm, such as waitfor delay "10:00"; kill takes effect immediately.
  • lock sleep - waiting on a lock acquisition; kill takes effect immediately.
  • sleeping - waiting on disk I/O or some other resource; usually indicates a process that is running but doing extensive disk I/O. Killed when it "wakes up", usually immediately; a few sleeping processes never wake up and require a server reboot to clear.
  • runnable - in the queue of runnable processes; kill takes effect immediately.
  • running - actively running on one of the server engines; kill takes effect immediately.
  • infected - the server has detected a serious error condition; extremely rare. The kill command is not recommended; a server reboot is probably required to clear the process.
  • background - a process, such as a threshold procedure, run by the server itself rather than by a user process. kill takes effect immediately, but use it with extreme care; a careful check of sysprocesses is recommended before killing a background process.
  • log suspend - a process suspended after reaching the last-chance threshold on the log. Killed when it "wakes up": 1) when space is freed in the log by a dump transaction command, or 2) when an SA uses the lct_admin function to wake up "log suspend" processes.

 

 

 

 

Hit/Miss Diagram

 

  • During execution, a query goes through many phases, each of which can result in a HIT or a MISS depending on the availability of the plan or data in cache.
  • The diagrams below illustrate the steps involved in a HIT and a MISS.

 

Figure 7 Steps when a Query is executed by a USER — HIT

[Diagram: the user connects to ASE through the interfaces file; a session is created, and the query flows through the parser, the compiler (procedure cache), and execution (data cache) inside shared memory before the data is fetched back to the user.]

  1. A connection is established between the user and ASE.
  2. A new session is created for the user.
  3. When a query is fired, it is passed to the parser, then to the compiler, and then executed, until the result is fetched back to the user.
  4. The parser checks for syntax errors; the compiler checks for an existing query plan in the procedure cache; execution checks for the corresponding data in the data cache.
  5. If everything is found where expected, it is called a HIT.

 

Figure 8 Steps when a Query is executed by a USER — MISS

[Diagram: the same flow as Figure 7, but the compiler does not find a plan, so the optimizer prepares one (using sysstatistics and sysqueryplans), and execution reads the table from disk when the pages are not in the data cache.]

 

 

 

 

 

 

  1. A connection is established between the user and ASE.
  2. A new session is created for the user.
  3. When a query is fired, it is passed to the parser, then to the compiler, and then executed, until the result is fetched back to the user.
  4. The parser checks for syntax errors; the compiler checks for a query plan in the procedure cache and, if none is found, the optimizer prepares one; execution checks for the data in the data cache and, if it is not found, reads it from disk before sending it back to the user.
  5. If either the query plan or the data is not found where expected, it is called a MISS.

 

Start and Shut Down Of Server

  • To start ASE, execute startserver -f RUN_<server name>. Use -m to start the server in single-user mode.
  • The following example shows the RUN_<servername> file edited to start an Adaptive Server named TEST in single-user mode on UNIX:
#!/bin/sh
#
# Adaptive Server Information:
#  name: TEST
#  master device: /work/master.dat
#  master device size: 10752
#  errorlog: /usr/u/sybase/install/errorlog
#  interfaces: /usr/u/sybase/interfaces
#
/usr/u/sybase/bin/dataserver -d/work/master.dat \
-sTEST -e/usr/u/sybase/install/errorlog \
-i/usr/u/sybase/interfaces \
-c/usr/u/sybase/TEST.cfg -m
  • Use the -p option in the RUN server file to generate a new password for the SA; the new SA password is printed in the errorlog, and the ASE server must be rebooted for this.
  • Once the configuration file is loaded, the server allocates shared memory and creates the <server name>.krg file. On Solaris/UNIX, shared memory is allocated to the ASE server; in version 12.5.3 a maximum of 3.7 GB can be allocated to the ASE server.
  • The IPC facilities (ipcs/ipcrm) are used to control and monitor the shared memory segments.
  • A server can be shut down in two modes; the <server name>.krg file is automatically deleted when the ASE server goes offline (see the isql sketch below).
    • with wait - a clean shutdown that checks and ensures all transactions are complete before stopping.
    • with nowait - a forceful shutdown that kills all open transactions.
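
For example, following the isql style used elsewhere in this document:

/* clean shutdown: waits for active transactions */
1> shutdown
2> go

/* forced shutdown: kills open transactions */
1> shutdown with nowait
2> go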
Backup/Recovery/Refresh/Restore

  • Recovery - bringing a database to the current state of its data from a previously maintained backup.
  • Refresh - loading data from one database into another, irrespective of server.
  • Restore - taking a database back to a previous state.
  • Backup - taking an extra copy of the existing data.

 

Steps to be followed for a test refresh:

  • Take backups of the Prod and Test databases; also save the db options of Test before the refresh operation.
  • bcp out the sysusers, sysprotects, and sysaliases tables from the Test database:
  • bcp test.dbo.sysusers out <file name> -Ulogin -Sserver
  • bcp test.dbo.sysprotects out <file name> -Ulogin -Sserver
  • bcp test.dbo.sysaliases out <file name> -Ulogin -Sserver
  • To copy the DDL of a table, use ddlgen -Ulogin -Ppassword -S<server> -T<object_type> (if a user requests a backup of specific tables).
  • Load the Test database from the Prod backup.
  • To allow modification of system tables: sp_configure "allow updates on system tables", 1.
  • Delete the rows from sysusers, excluding the dbo user: delete from sysusers where name not in ('dbo').
  • Copy the saved data back into the sysusers and sysprotects tables:
  • bcp test.dbo.sysusers in <path> -Ulogin -Sserver
  • bcp test.dbo.sysprotects in <path> -Ulogin -Sserver
  • Disable updates to system tables: sp_configure "allow updates on system tables", 0.
  • online database <database name>.
  • Set the db options for the database.
  • Remap the users after the load/refresh.

 

  • Run dbcc and update statistics on a need basis.

Dbcc

  • The database consistency checker (dbcc) checks the logical and physical consistency of a database and provides statistics, planning, and repair functionality.
  • dbcc tablealloc checks the specified user table to ensure that:
    • All pages are correctly allocated.
    • Partition statistics on the allocation pages are correct.
    • No page is allocated that is not used.
    • All pages are correctly allocated to the partitions in the specified table, and no page is used before it is allocated.
    • No page is used that is not allocated.
  • dbcc checkalloc ensures that:
    • All pages are correctly allocated.
    • Partition statistics on the allocation pages are correct.
    • No page is allocated that is not used.
    • All pages are correctly allocated to individual partitions, and no page is used before it is allocated.
    • No page is used that is not allocated.
  • Syntax: dbcc checkalloc [(database_name [, fix | nofix])]
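
A minimal usage sketch (mydb and mytable are hypothetical; checkalloc with the fix option generally requires the database to be in single-user mode):

use master
go
sp_dboption mydb, 'single user', true
go
dbcc checkalloc(mydb, fix)
go
dbcc tablealloc(mytable)
go
sp_dboption mydb, 'single user', false
go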


  • dbcc indexalloc checks the specified index to see that:
    • All pages are correctly allocated.
    • No page is allocated that is not used.
    • No page is used that is not allocated.
  • dbcc checktable checks the specified table to see that:
    • Index and data pages are linked correctly.
    • Indexes are sorted properly.
    • Pointers are consistent.
    • All indexes and data partitions are correctly linked.
    • Data rows on each page have entries in the row-offset table, and these entries match the locations of the data rows on the page.
    • Partition statistics for partitioned tables are correct.
  • dbcc checkdb runs the same checks as dbcc checktable on each table in the specified database. If you do not give a database name, dbcc checkdb checks the current database. It gives messages similar to those returned by dbcc checktable and makes the same types of corrections.
  • dbccdb database setup:
    • Determine the size: sp_plan_dbccdb.
    • Initialize the disk devices: based on the size, create data and log devices.
    • Create the dbccdb database on the devices created above.
    • Install the stored procedures: isql -Usa -P -S<server> -iinstalldbccdb -odbccdb_error.out.
    • Configure Adaptive Server: sp_configure "number of worker processes", 2.
    • Create the workspaces.
    • Set the dbccdb configuration parameters.
    • Run dbcc checkstorage.
    • Evaluate the configuration.
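
Once dbccdb is set up, a typical check run might look like this sketch (mydb is hypothetical; sp_dbcc_summaryreport is one of the dbccdb reporting procedures):

sp_plan_dbccdb mydb
go
dbcc checkstorage(mydb)
go
use dbccdb
go
sp_dbcc_summaryreport
go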
PROXY TABLES

  • sp_addserver PROX_<server name>, NULL, <server name as it appears in the interfaces file>
  • sp_addexternlogin PROX_<server name>, <login name in source server>, <user at remote server>, <password at remote server>
  • sp_addobjectdef proxy_<table name>, "PROX_<server name>.<remote database name>.<remote object owner>.<object name>", "table"
  • create proxy_table proxy_<table name> at "PROX_<server name>.<remote database name>.<remote object owner>.<object name>"
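
Putting the template above together, a hedged example with hypothetical names (remote server REMOTE1, remote database remotedb, table orders):

sp_addserver PROX_REMOTE1, NULL, REMOTE1
go
sp_addexternlogin PROX_REMOTE1, sa, remote_user, remote_pwd
go
create proxy_table proxy_orders at "PROX_REMOTE1.remotedb.dbo.orders"
go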
MDA Tables

  • MDA tables provide detailed information about server status, the activity of each process in the server, the utilization of resources such as data caches, locks, and the procedure cache, and the resource impact of each query run on the server.
  • Steps to follow when installing MDA tables:
    • Check sp_configure 'enable cis' and set it to 1.
    • Add a 'loopback' server name alias in master: sp_addserver loopback, null, @@servername.
    • Install the MDA tables: isql -Usa -P<password> -S<server name> -i ~/scripts/installmontables.
    • Assign mon_role to logins allowed MDA access: grant role mon_role to sa.
    • To test the basic configuration: select * from master..monState.
    • Set the monitoring configuration parameters, e.g.: enable monitoring to 1, sql text pipe active to 1, sql text pipe max messages to 100, plan text pipe active to 1, plan text pipe max messages to 100, statement pipe active to 1, statement pipe max messages to 100, errorlog pipe active to 1, errorlog pipe max messages to 100, deadlock pipe active to 1, deadlock pipe max messages to 100, wait event timing to 1, process wait events to 1, object lockwait timing to 1, sql batch capture to 1, statement statistics active to 1, per object statistics active to 1, max sql text monitored to 2048.
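
Once installed, the MDA tables can be queried like ordinary tables; a small sketch (column names as per the 12.5.x MDA documentation; verify them on your version):

select SPID, CPUTime, PhysicalReads, LogicalReads
from master..monProcessActivity
go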
Multiple Temp Databases

  • Multiple tempdbs are useful when a user or an application needs a dedicated tempdb for its operations. If the other tempdbs fill up, the default tempdb helps bring the rest back to normal operation. The default tempdb is usually bound to the SA login, to avoid a server-wide tempdb-full condition. The number of user-created tempdbs can be configured according to the available hardware resources and the user application.
  • Steps to be followed when creating a tempdb:
    • Create the devices, then the database: create temporary database <tempdb name> on <device name> = <size> log on <log device name> = <size>.
    • Add the database as a tempdb: sp_tempdb 'add', '<tempdb name>', 'default'.
    • Bind a login to the tempdb: sp_tempdb 'bind', 'lg', '<login name>', 'db', '<tempdb name>'.
    • Unbind: sp_tempdb 'unbind', 'lg', '<login name>'.
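
A minimal sketch of the steps above, with hypothetical device, database, and login names:

create temporary database tempdb2 on tempdev1 = 200 log on templog1 = 50
go
sp_tempdb 'add', 'tempdb2', 'default'
go
sp_tempdb 'bind', 'lg', 'jsmith', 'db', 'tempdb2'
go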
Utilities

  • optdiag - displays optimizer statistics or loads updated statistics into the system tables.


The advantages of optdiag are:

  • optdiag can display statistics for all tables in a database, or for a single table.
  • optdiag output contains additional information useful for understanding query costs, such as index height and the average row length.
  • optdiag is frequently used for other tuning tasks, so you should have these reports on hand.
  • isql - the interactive SQL client for Adaptive Server.

Syntax: isql -U<user> -S<server> -D<database>

When connecting to a server with isql, the client looks in the directory-services file (sql.ini on Windows, the interfaces file on UNIX/Solaris) to find the path to the server.

  • ddlgen - used to take a backup of an object's structure (DDL).
  • defncopy - used to take a backup of defaults, views, rules, stored procedures, and triggers.
  • bcp - has two directions, out and in: out extracts the data from an object into a flat file, and in loads it back. bcp in has two modes, fast bcp (non-logged) and slow bcp (logged); for fast bcp the option must be enabled: sp_dboption <dbname>, "select into/bulkcopy/pllsort", true.
  • If a server configuration value has reached its maximum threshold ('number of open databases', 'number of open objects', 'number of open indexes', 'number of user connections' or 'number of locks'), follow the steps below:
  • sp_countmetadata - gives the total number of objects such as tables, stored procedures, views, triggers, etc.: sp_countmetadata "configname" [, dbname].
  • sp_monitorconfig - gives the maximum usage/current value of the configuration parameters mentioned above.
  • sp_configure - to reconfigure the parameter with a new value.
Troubleshooting

  • To check whether a port is open:

telnet <IP address> <port number>

If the port is open, you will see a blank screen in the command prompt.

  • To abort all open transactions when the log is full: select lct_admin("abort", <spid>, <dbid>).

Process to kill the open transactions

  1. The spid of the currently running transaction can be identified from the syslogshold table.
  2. Once the spid is identified, the user details can be found in the sysprocesses table.
  3. Execute dbcc traceon(3604): turning on this trace flag sends the trace output to the console rather than the error log.
  4. Execute dbcc sqltext(<spid>) to view the SQL text of the transaction.
  5. Execute dbcc traceoff(3604).
  6. To kill the process: kill <spid>.
  • To view the server status on UNIX: showserver, or ps -eaf | grep sybase.
  • To find the server version: select @@version, or the banner in the errorlog, or dataserver -v.
  • To find the current isolation level: select @@isolation.
  • To manually clear shared memory segments: ipcrm -s/-m <id>; verify the values against the <server name>.krg file, and once the memory is cleared, delete the <server name>.krg file.
  • Also check for any active Sybase processes with ps -eaf | grep sybase, and kill those related to the particular server.
  • Memory jam - occurs when the shared memory segments are not de-allocated.
  • To recover, check how many servers are running, then match the <server name>.krg file against the details of the shared segments allocated to the server.
  • Delete the identified segment using ipcrm -s <id>.
  • It is good practice to keep a backup of the <server name>.krg file, as it is deleted once the server shuts down.

Checking the server blocking process

  1. When process A is blocked by process B, B by C, and so on through N processes, they are all ultimately blocked by the Nth process. To unblock process A, the processes down the chain to the Nth must be killed or allowed to complete.
  2. To retrieve the blocked and corresponding blocking process IDs, run a query against the sysprocesses table (see the sketch below).
  3. Check the final process ID:
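
A minimal sketch of such a query (the blocked column holds the spid of the blocking process):

select spid, blocked, suser_name(suid), cmd, status
from master..sysprocesses
where blocked != 0
go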

dbcc traceon(3604)

go

dbcc sqltext(<spid>)

go

dbcc traceoff(3604)

go

Or

set tracefile "<file-path>" for <spid>

go

set show_sqltext on

set showplan on

go

sp_helpapptrace

go

set tracefile off for <spid>

go

  4. If it is a select operation, kill the process; otherwise, wait until the operation completes.

Recovering Master Database

  1. Recovering the master database can be done only if a valid dump of it exists.
  2. To rebuild it, the steps are:

     i. dataserver -d<device name> -z<page size> -b<size>

     ii. Log into the server in single-user mode.

     iii. Load the master database from the previous backup.

     iv. Restart the server.

  3. Backups of the key system tables, such as syslogins, sysusages, sysdevices, sysdatabases, and sysalternates, should always be maintained.

Point-In-Time Recovery

  1. PITR can be done only with a valid full dump of the database, the transaction log dumps, and a dump of the current transaction log (dump tran <db name> to '<file name>' with no_truncate; this option allows the backup to be taken even if the database device has failed).
  2. To do PITR, the steps are (see the sketch below):

     i. Restore the full backup.

     ii. Restore all the transaction logs in sequence.

     iii. Restore the most recent transaction log, which was dumped with the no_truncate option.

     iv. Bring the database online.
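
A sketch of the load sequence, with hypothetical dump file names; the until_time clause of load transaction stops recovery at the chosen moment:

load database mydb from '/dumps/mydb.full'
go
load transaction mydb from '/dumps/mydb.tran1'
go
load transaction mydb from '/dumps/mydb.tran_lastchance'
with until_time = 'Mar 10, 2010 10:51:43.866AM'
go
online database mydb
go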

Extending Temp Database to separate data and log segments

  1. Initialize separate data and log devices.
  2. Extend the tempdb database onto these devices (see the sketch after this list).
  3. sp_configure 'allow updates', 1 - allows updating the sysusages table.
  4. delete from sysusages where dbid = 2 and segmap = <the segment map value for the master device>
  5. sp_configure 'allow updates', 0
  6. sp_helpdb tempdb - shows the details for the tempdb database.
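
A sketch of steps 1-2 (device names, paths, and sizes hypothetical; disk init sizes are in 2K pages, alter database sizes in MB; add vdevno to disk init if your version requires it):

disk init name = 'tempdata1', physname = '/sybase/dev/tempdata1.dat', size = 102400
go
disk init name = 'templog1', physname = '/sybase/dev/templog1.dat', size = 25600
go
alter database tempdb on tempdata1 = 200 log on templog1 = 50
go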

Log free space issue (a negative value shown in log segment usage)

log on to Sybase server

 

use master

go

sp_stop_rep_agent <database name>

go

 

Logon to replication server

 

admin health

go

suspend connection to <dataserver>.<database>

go

admin who

go

admin health

go

admin disk_space

go

 

log on to Sybase server

 

use master

go

sp_dboption <database name>, 'dbo use only', true

go

sp_dboption <database name>, 'single user', true

go

select spid from sysprocesses where dbid = db_id('<database name>')

go

(if any active processes are found, kill them)

use <database name>

go

checkpoint

go

dbcc traceon(3604)

go

dbcc dbrepair(<database name>, 'fixlogfreespace')

go

dbcc traceoff(3604)

go

use master

go

sp_dboption <database name>, 'dbo use only', false

go

sp_dboption <database name>, 'single user', false

go

 

Logon to replication server

 

admin health

go

resume connection to <dataserver>.<database>

go

admin who

go

admin health

go

admin disk_space

go

 

log on to Sybase server

 

use master

go

sp_start_rep_agent <database name>

go

 

 

TIPS

  • To find a row ID:

select rownum = identity(10), <column name> into #<temp table name> from <table name>
select * from #<temp table name>
drop table #<temp table name>
go

 

How to Apply EBF

  • Sybase releases a bulletin listing the details of the updates and bug fixes (EBF details) for each version of the Sybase server. All of these patches/hot fixes are bundled in a package and released with the list of bugs fixed or enhancements to the product.
  • Process for patch deployment:
  1. Download the patches from the Sybase site.
  2. Create a temporary directory and unzip the patch file into it. For DOS installations, set the temporary directory as your current directory and run setup.exe there. For Windows installations, double-click setup.exe in the temporary directory from Explorer.
  3. The patch ZIP file contains a file called README.TXT, a text file documenting the bug fixes and changes made in this release of the software.
  4. Evaluate the patches and verify that each patch is suitable for your current version.
  5. Classify them into required and not required. Any patch presumed unnecessary for deployment now can be ignored, as the final version of the hot fix will be rolled out in the next service pack of the product.
  6. Run basic dbcc checks to make sure all the databases are in good condition. If you find any error in the dbcc output, fix the issue before rolling out the new patch.
  7. Take a backup of all the system and user databases, and bcp out the relevant system tables.
  8. Lock all the users on the data server, as some post-install steps need to be performed.
  9. Bounce the server.
  10. Deploy the required patches in the test environment.
  11. Run the standard upgrade scripts, such as installmaster, installmsgs, and installmontables, from $SYBASE/ASE_125/scripts.
  12. Verify the output for any issues.
  13. Change the $SYBASE variable in .profile to point to the new EBF directory and source it.
  14. Copy the run-server file to the new location (if required; most clients keep the run-server file in $SYBASE/ASE/install).
  15. Bring up the server.
  16. Verify the errorlog for any issues.
  17. Validate the errorlog and fix any issues with the applications.
  18. Unlock all the users and hand the server over to the apps team.
  19. If any hot fix causes undesired results, do not deploy it until further testing.
  20. Once the test environment is stable with the new patch deployments and all issues seen during deployment are resolved, the same can be moved to production.
  21. Document all the steps, observations, and workarounds carried out during the testing phase.
  22. Prepare a checklist.
  23. Plan the roll-out into production.

Query and Server Performance Tuning

  • Below are some tools for query tuning:
  1. set showplan on - tells you the final decisions the optimizer makes about your queries.
  2. set showplan off - turns showplan back off.
  3. set statistics io on - displays the number of logical and physical reads and writes required for each table in a query; if resource limits are enabled, it also displays the total actual I/O cost.
  4. dbcc traceon(3604, 302) - helps you understand why and how the optimizer makes its choices; it can help you debug queries and decide whether to use certain options, such as specifying an index or a join order for a particular query.
  5. dbcc traceon(3604, 310) - gives per-table I/O estimates.
  6. dbcc traceon(3604, 317) - gives a report on all the plans considered.
  7. set noexec on - prepares the query plan without executing the query.

 

  • Below are some points to take care of for better performance:
  1. Data and log of tempdb should be placed on different devices.
  2. Data and log of user databases should also be placed on different devices.
  3. The data cache should be configured so that the hit/miss ratio is greater than 90%.
  4. Statistics must be updated periodically; also run sp_recompile.
  5. Run update index statistics.
  6. All nonclustered indexes should be placed on a separate segment.
  7. All lookup tables should be placed on a separate segment.
  8. Indexes should be appropriate; reorg/recreate them if necessary.
  9. Run reorg forward_row / reorg rebuild, e.g.: select "reorg rebuild " + name + char(10) + "go" from sysobjects where type = 'U' and lockscheme(name) not in ('allpages')

 

  • sp_monitor - displays statistics about Adaptive Server. Adaptive Server keeps track of how much work it has done in a series of global variables; sp_monitor displays the current values of these global variables and how much they have changed since the last time the procedure executed.
  • sp_sysmon - displays performance information about Adaptive Server. It sets internal counters to 0 and then waits for the specified interval while activity on the server increments the counters; when the interval ends, sp_sysmon prints information from the values in the counters.
  • If you face performance issues, check the physical_io column in the sysprocesses table and trace the spid details:

dbcc traceon(3604)

dbcc sqltext(<spid>)

sp_who

sp_lock

sp_object_stats "00:10:00"

sp_monitorconfig "all"

 

Calculations

  • To find the vdevno: select max(low/16777216) from master..sysdevices.
  • Procedure cache size = (max number of concurrent users) * (4 + size of largest plan) * 1.25.
  • Minimum procedure cache size needed = (number of main procedures) * (average plan size).
  • To get a rough estimate of the size of a single stored procedure, view, or trigger, use: select (count(*) / 8) + 1 from sysprocedures where id = object_id("procedure_name").
  • Number of worker processes = [max parallel degree] x [number of concurrent connections wanting to run queries in parallel] x [1.5].
  • Size of tempdb = 20% of the total size of all user databases.
  • Size of the log device = 10% of the data size of the particular device.
  • Size of the dbcc database - can be found with sp_plan_dbccdb.
  • Stored procedure to get the segment usage report

 

  • SQL to calculate database usage

select "Database," = convert(char(20), db_name(dbid)) + ',',
"Data Size," = str(sum(size * abs(sign(segmap - 4))) / 512.0, 7, 2) + ',',
"Data Used," = str(sum((size - curunreservedpgs(dbid, lstart, unreservedpgs)) * abs(sign(segmap - 4))) / 512.0, 7, 2) + ',',
"Data Free," = str(100.0 * sum((curunreservedpgs(dbid, lstart, unreservedpgs)) * abs(sign(segmap - 4))) / sum(size * abs(sign(segmap - 4))), 3) + "%" + ',',
"Log Size," = str(sum(size * (1 - abs(sign(segmap - 4)))) / 512.0, 7, 2) + ',',
"Log Used," = str(sum((size - curunreservedpgs(dbid, lstart, unreservedpgs)) * (1 - abs(sign(segmap - 4)))) / 512.0, 7, 2) + ',',
"Log Free" = str(100.0 * sum((curunreservedpgs(dbid, lstart, unreservedpgs)) * (1 - abs(sign(segmap - 4)))) / sum(size * (1 - abs(sign(segmap - 4)))), 3) + "%"
from master..sysusages
where segmap < 5
group by db_name(dbid)

  • SQL to find the database and the related devices

select dbid, size, name, phyname "physical device" from sysusages, sysdevices where name = 'xxx' and vstart between low and high compute sum(size)

go

  • SQL to find the sessions with the highest I/O and CPU usage:

select spid, suser_name(suid), hostname, program_name, physical_io, memusage, ipaddr from sysprocesses order by physical_io desc

go

select name, accdate, totcpu, totio from syslogins order by totcpu desc

go

  • An entry in the interfaces file: master tli tcp /dev/tcp x0002 0401 81 96 c451. This can be interpreted as:

x0002  header information (no user interpretation)
0401   port number (1025 decimal)
81     first octet of the IP address (129 decimal)
96     second octet of the IP address (150 decimal)
c4     third octet of the IP address (196 decimal)
51     fourth octet of the IP address (81 decimal)

 


Replication Overview and Architecture Diagram

  • Sybase Replication Agent is the Sybase solution for replicating table data changing operations and stored procedure invocations against a primary database.
  • Sybase Replication Agent extends the capabilities of Replication Server by allowing non-Sybase (heterogeneous) database servers to act as primary data servers in a replication system based on Sybase replication technology.
  • Rep Server is configured by using command rs_init.
  • Primary Dataserver – It is the source of data where client applications enter/delete and modify data. This need not be ASE; it can be Microsoft SQL Server, Oracle, DB2, and Informix.
  • Replication Agent/Log Transfer Manager - The Log Transfer Manager (LTM) is a separate program/process that reads the transaction log of the source server and transfers the changes to the replication server for further processing. With ASE 11.5 this became part of ASE and is now called the Replication Agent; you still need an LTM for non-ASE sources. When replication is active, there is one connection per replicated database in the source dataserver (visible with sp_who).
  • Replication Server (s) – The replication server is an Open Server/Open Client application.  The server part receives transactions being sent by either the source ASE or the source LTM.  The client part sends these transactions to the target server which could be another replication server or the final dataserver.
  • Replicate (target) Dataserver – It is the server in which the final replication server (in the queue) will repeat the transaction done on the primary. One connection for each target database, in the target dataserver when the replication server is actively transferring data (when idle, the replication server disconnects or fades out in replication terminology).
  • Stable Queue - After Replication Server is installed, a disk partition is set up and used by Replication Server to establish stable queues. During replication operations, Replication Server temporarily stores updates in these queues. There are three different types of stable queues, each of which stores a different type of data.
    • Inbound Queue- holds messages only from a Replication Agent. If the database you add contains primary data, or if request stored procedures are to be executed in the database for asynchronous delivery, Replication Server creates an inbound queue and prepares to accept messages from a Replication Agent for the database.
    • Outbound Queue- holds messages for a replicate database or a replicate Replication Server. There is one outbound queue for each of these destinations:
      • For each replicate database managed by a Replication Server, there is a Data Server Interface (DSI) outbound queue.
      • For every Replication Server to which a Replication Server has a route, there is a Replication Server Interface (RSI) outbound queue.
  • Subscription Materialization Queue- holds messages related to newly created or dropped subscriptions. This queue stores a valid transactional “snapshot” from the primary database during subscription materialization or from a replicate database during dematerialization.
  • Stable Queue Manager manages all these operations related to Stable Device.
  • Data Server Interface- It connects RDS and Rep Server. It reads data from Outbound Queue and replicates to RDS according to the subscriptions.
  • Distributor- It takes care of sending committed transaction from inbound queue to outbound queue with the help of Stable Queue Transaction (SQT) Reader.
  • Any exceptions encountered during the process are recorded into rs_exception.
    • Whenever two servers need to be connected over a WAN, both have to be set up in a single domain.
    • If there are more than 10 Replication Servers, Replication Monitoring Services (RMS) is required to manage them all; otherwise they are managed by Replication Manager.
    • There can be multiple replication definitions for a single table.
    • Multiple servers can be connected in two ways: hierarchical and star.
    • The time a transaction takes to travel from the primary data server to the replicate data server is known as latency.

 

 

 

Figure 9 Replication Architecture

[Diagram: the Rep Agent reads the transaction log of the primary data server and forwards the changes in Log Transfer Language to the Replication Server; the secondary truncation point is recorded in rs_locater. Committed transactions move from the inbound queue, via the Stable Queue Transaction reader and the distributor, to the outbound queue on the stable device (managed by the Stable Queue Manager), and the Data Server Interface applies them to the replicate data server according to the subscriptions and replication definitions.]



Crontab

  • Job scheduling at the UNIX level is done with crontab; all DBA jobs are scheduled to run automatically through it. Each user at the UNIX level has their own crontab, and one must have the privilege to add to or modify it.
  • Syntax: crontab [ -e (opens the crontab editor) | -l (lists all the crontab entries) | -r (removes the crontab file) ].
  • Always keep a backup of the crontab.

SYBASE ALL IN ONE


Attending a Sybase DBA interview is not an easy task: you have to use your expertise and experience to answer to the point, based on your environment. But this post helps you prepare for the interview to an extent.

 

1. Tell me something about yourself?

Explain your education, family background, and work experience.

2. What are the system roles, and what is their status by default?

sa_role, sso_role and oper_role are the system roles. They are on by default.

 

3. What are the daily activities of a Sybase DBA?

Check the status of the server (ps -eaf | grep <server name>), or with showserver at the OS level, or simply try to log in. If this fails, we know the server is not up, and we have to start it after looking at the errorlogs.

Check the free space on the file systems (df -k).

Check the status of the databases (sp_helpdb).

Check the scheduled cron jobs.

Check whether any process is blocked (sp_who and sp_lock).

See whether any backups or database loads are due.

Check the errorlog.

 

4. What are the default databases in ASE 12.5?

master, model, tempdb, sybsystemprocs and sybsystemdb.

Optional databases: pubs2, pubs3, sybsecurity (auditing) and dbccdb.

 

5. Tell me about your work environment.

I worked on ASE 12.5.3 on Solaris 8.

Altogether we have 4 ASE servers on 4 different Solaris boxes.

Of them, 2 are production boxes, 1 is UAT and 1 is a dev server.

The production boxes have 2 CPUs each; UAT has 2 CPUs and the dev server has 4 CPUs.

In total we have around 180 databases, with 60 on prod and 60 on dev.

The biggest database is 30GB.

There are 5000 users in production.

We handle tickets (any production issues) received through email.

 

6. If the production server went down, what steps would you follow?

First I will inform all the application managers, and they will send an alert message to all the users regarding the downtime.

Then I will look into the errorlog and take the relevant action based on the error message. If I cannot solve the issue, I will escalate to my DBA manager and log a case with Sybase at priority P1 (system down).

 

7. What will you do if you hear that server performance is down?

First check the network transfer rate (using ping); it might be a network problem, in which case I will contact the network people. Make sure that tempdb is large enough to support the user workload (as a rule of thumb, tempdb should be about 25% of the total size of the user databases). Make sure that update statistics and sp_recompile of the stored procedures are run on a regular basis, check the database fragmentation level (and run a defrag exercise if necessary), and run sp_sysmon and sp_monitor and analyze the output (CPU utilization, etc.).

 

8. What if query performance is down?

Based on the query, first run set showplan on to see how the query is being executed and analyze the output; based on that, tune the query, creating indexes on the tables involved if necessary. Also check from the output whether the optimizer is picking the right plan, run optdiag to check when update statistics was last run (optimization of the query depends on the statistics), and run sp_recompile so that the stored procedures pick a new plan based on the current statistics.

 

9. What precautions will you take to avoid the same type of problem?

We never had such an issue, but I would document it, with the steps taken to resolve the issue.

 

10. If the time comes that you have to take an important decision but your reporting manager is not there, how will you decide?

I will approach my project manager's boss, explain the situation and seek permission from him; if he is not available, I will take the call and keep all the application managers in the loop.

 

11. How do you check the currently running processes?

ps –eaf

 

12. Can you create your own system stored procedures?

Yes, we can; for example, we create SPs to check the fragmentation level, etc.

 

13. After issuing an ASE kill command on the offending connection, how do you un-suspend the database?

select lct_admin("unsuspend", db_id("db_name"))

 

14. What command helps you to know the process running on a given port? (Only su can run the first one.)

/var/tmp/lsof | grep 5300 (su)

netstat -anv | grep 5300 (anyone)

 

15. How do you synchronize logins from a lower-version server to a higher-version server? Take the 11.9.2 syslogins structure and go to the 12.5 (higher-version) server.

Create a table named logins in tempdb with this structure, bcp the data into this logins table, then use master and run:

insert into syslogins select *, null, null from tempdb..logins

 

16. How do you delete UNIX files which are more than 3 days old?

You must be in the parent directory of snapshots and execute the command below:

find snapshots -type f -mtime +3 -exec rm {} \;

find /backup/logs/ -name "daily_backup*" -mtime +21 -exec rm -f {} \;

 

17. How do you find how far the rollback of a process has progressed?

kill 826 with statusonly
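
For example, checking on spid 826 periodically shows how far the rollback has progressed (the figures below are illustrative, and the exact message text varies by release):

1> kill 826 with statusonly

2> go

spid: 826 Transaction rollback in progress. Estimated rollback completion: 17%. Estimated time left: 2 seconds.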

 

18. What is the difference between truncate_only & no_log?

(i) truncate_only: it is used to truncate the log gracefully. It checkpoints the database before truncating the log, and removes the inactive part of the log without making a backup copy. Use it on databases whose log segment is not on a separate device from the data segments. Do not specify a dump device or Backup Server name.

(ii) no_log: use no_log when your transaction log is completely full. no_log does not checkpoint the database before dumping the log; it removes the inactive part of the log without making a backup copy, and without recording the operation in the transaction log. Use no_log only when you have totally run out of log space and cannot run the usual dump transaction command. Use no_log as a last resort, and only after dump transaction with truncate_only fails.

When to use dump transaction with truncate_only or with no_log:

• When the log is on the same segment as the data: dump transaction with truncate_only to truncate the log.

• When you are not concerned with the recovery of recent transactions (for example, in an early development environment): dump transaction with truncate_only to truncate the log.

• When your usual method of dumping the transaction log (either the standard dump transaction command or dump transaction with truncate_only) fails because of insufficient log space: dump transaction with no_log to truncate the log without recording the event.

Note: dump the database immediately afterward to copy the entire database, including the log.

NOTE: You should always prefer truncate_only. There are times when there is absolutely no space left in the tran log, and then you will have to use the no_log option, which truncates the tran log but does not write into the transaction log; a dump tran with truncate_only does write into the tran log.
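
A minimal sketch of the sequence, assuming a database named salesdb and a dump path that are purely illustrative:

1> dump transaction salesdb with truncate_only

2> go

/* only if the log is completely full and truncate_only fails */

1> dump transaction salesdb with no_log

2> go

/* re-establish a recovery baseline afterwards */

1> dump database salesdb to "/backup/salesdb.dmp"

2> go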

 

19. Define normalization.

It is the process of designing a database schema in a way that eliminates redundant columns and database inconsistency.

Normalization is the process of breaking your data into separate components to reduce the repetition of data. Normalization can go up to 5 levels, and each level reduces the repetition of data further: first, second and third normal form, and BCNF.

 

Actually, normalization is the process of organizing data to minimize redundancy. Normalization usually involves dividing a database into two or more tables and defining relationships between the tables. The objective is to isolate data so that additions, deletions, and modifications of a field can be made in just one table and then propagated through the rest of the database via the defined relationships. Basically you have to normalize your database up to 3 levels:
1st normal form
2nd normal form
3rd normal form

The goal of database normalization is to decompose relations with anomalies in order to produce smaller, well-structured relations. Normalization usually involves dividing large, badly formed tables into smaller, well-formed tables and defining relationships between them. (Reference: Wikipedia)

The purpose of database normalization is to eliminate data redundancy and inconsistent dependency. Redundant data wastes disk space and creates maintenance problems: for example, if the customer name is stored in more than one place, then it must be changed or deleted in all those places at update or delete time, which also increases processing time. Inconsistent dependency can make data difficult to access because the path to the data is missing or broken.

There are certain rules for database normalization; each rule is called a normal form. If the first rule is observed, we can say the database is in first normal form; if the first three rules are observed, the database is considered to be in third normal form. There are other rules too, such as 4th normal form and 5th normal form.

First Normal Form

· Remove repeating groups of information.

· Assign Primary key.

· Each attribute is atomic; it should not contain multiple values.

Second Normal Form

· Move redundant data to a separate table.

· Relate this table with a foreign key.

Third Normal Form

· Remove columns that do not depend on the primary key.

20.What are the types of normalization?

First normal form

The rules for First Normal Form are:

Every column must be atomic.  It cannot be decomposed into two or more subcolumns.

You cannot have multivalued columns or repeating groups

Each row and column position can have only one value.

 

Second normal form

 

For a table to be in second normal form, every non-key field must depend on the entire primary key, not on part of a composite primary key. If a table has only single-field primary keys, it is automatically in second normal form.

 

Third normal form

 

For a table to be in third normal form, a non-key field cannot depend on another non-key field.
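
To make these forms concrete, here is a small sketch using hypothetical tables: the flat design repeats customer data in every order row, and the normalized design moves it into its own table referenced by key.

/* unnormalized: customer columns repeat in every order row */

create table orders_flat
(order_id  int         not null,
 cust_name varchar(40) not null,
 cust_city varchar(40) not null,
 item      varchar(40) not null)

go

/* normalized: customer attributes live in one place */

create table customers
(cust_id   int         not null,
 cust_name varchar(40) not null,
 cust_city varchar(40) not null)

go

create table orders
(order_id int not null,
 cust_id  int not null,   /* references customers */
 item     varchar(40) not null)

go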

 

21. What precautions are taken to reduce downtime?

Disk mirroring or a warm standby.

 

22. What are the isolation levels? List the different isolation levels in Sybase and the default.

To avoid manually overriding locking, we have transaction isolation levels, which are tied to the transaction.

The different isolation levels are 0, 1, 2 and 3.

Isolation level 0: allows reading pages that are currently being modified; it allows dirty reads.

Isolation level 1: a read operation can only read committed pages; no dirty reads are allowed.

Isolation level 2: allows a page to be read many times within the same transaction and guarantees that the same value is read each time; other transactions are prevented from updating the rows that were read.

Isolation level 3: additionally prevents other transactions from updating, deleting or inserting rows for pages previously read within the transaction.

Isolation level 1 is the default.
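
For example, the isolation level can be set for a session or for a single query (the table name t is a placeholder):

set transaction isolation level 3

go

/* or allow dirty reads for one query only */

select * from t at isolation 0

go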

 

23. What is optdiag?

The optdiag utility displays statistics from the systabstats and sysstatistics tables. optdiag can also be used to update sysstatistics information. Only the SA can run optdiag. (It is a command-line tool for reading, writing and simulating table, index, and column statistics.)

Advantages of optdiag:

optdiag can display statistics for all the tables in a database, or for a single table.

optdiag output contains additional information useful for understanding query costs, such as index height and the average row length.

optdiag is frequently used for other tuning tasks, so you should have these reports on hand.

Disadvantages of optdiag:

It produces a lot of output, so if you need only a single piece of information, such as the number of pages in the table, other methods are faster and have lower system overhead.
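
A typical invocation, with the server, password and file names as placeholders:

optdiag statistics pubs2..authors -Usa -Ppassword -SSYBSRV1 -o authors.opt

The output file can be edited and loaded back with the -i option to simulate different statistics.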

 

NOTE: What are the default character set and sort order after installation of Sybase ASE 15?

The default character set is cp850, which supports the English language, upper case and lower case, and the special accented characters used in European languages.

The default sort order that goes with this character set is binary, which is the fastest of sorts when building index structures or during execution of order by clauses.

 

 

24. How frequently do you defrag the database?

     Whenever there are heavy inserts, updates and deletes on a table, we defrag it.

 

25. In 12.5, how do you configure the procedure cache?

     sp_cacheconfig

 

26. What are the available page sizes in ASE 12.5?

     The page sizes are 2K, 4K, 8K and 16K (2K is the default).

 

27. How do you see the performance of the Sybase server?

      Using sp_sysmon, sp_monitor, sp_who and sp_lock.

 

28. What are the different types of shells?

     Bourne shell, C shell and Korn shell.

 

29. What is the difference between the Bourne shell and the Korn shell?

The Bourne shell is a basic shell which is bundled with all UNIX systems, whereas the Korn shell is a superset of the Bourne shell. It has additional features such as aliases and file-name completion, and it has a history command which can recall up to 200 previous commands.

 

30.  How do you see the CPU utilization on UNIX?

   using sar & top

 

 

31. How do you mount a file system?

    With mount <device> <mount point>.

 

32. How do you find which process is using a given port?

     netstat -anv | grep 5000

     /var/tmp/lsof | grep 5300

 

33. How do you check for long-running transactions?

      Using master..syslogshold.
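
For example, to see the oldest active transaction in a given database (the database name is a placeholder):

1> select spid, starttime, name

2> from master..syslogshold

3> where dbid = db_id("salesdb")

4> go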

 

34. What is an index? What are the types of indexes?

An index is a separate storage structure created for a table. There are two types of indexes: clustered and non-clustered.

 

Clustered Index. Vs Non-Clustered Indexes

 

Typically, a clustered index will be created on the primary key of a table, and non-clustered indexes are used where needed.

 

Non-clustered indexes

Leaf pages are stored in the b-tree, separately from the data

Lower overhead on inserts vs. clustered

Best for single-key queries

The last page of the index can become a ‘hot spot’

Up to 249 non-clustered indexes per table

 

Clustered index

Records in the table are physically sorted by key values

Only one clustered index per table

Higher overhead on inserts, if a re-org of the table is required

Best for queries requesting a range of records

The index must exist on the same segment as the table

 

Note: with a “lock datapages” or “lock datarows” locking scheme, clustered indexes are sorted physically only upon creation. After that, the index behaves like a non-clustered index.
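
A minimal sketch of both index types on a hypothetical orders table:

create clustered index orders_cidx on orders(order_id)

go

create nonclustered index orders_ncidx on orders(cust_id)

go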

 

35. What was your most challenging task?

Master database recovery

 

36. What are the dbcc commands?

     The database consistency checker (dbcc) provides commands for checking the logical and physical consistency of a database. Two major functions of dbcc are:

Checking page linkage and data pointers at both the page level and the row level, using checkstorage or checktable and checkdb.

Checking page allocation, using checkstorage, checkalloc, checkverify, tablealloc and indexalloc.

Commonly used commands: dbcc checkstorage, dbcc checktable, dbcc checkalloc, dbcc indexalloc, dbcc checkdb.
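
For example, a routine consistency check might look like this (salesdb is a placeholder; trace flag 3604 sends the dbcc output to the client session):

dbcc traceon(3604)

go

dbcc checkdb(salesdb)      /* page linkage and row checks, table by table */

go

dbcc checkalloc(salesdb)   /* page allocation checks */

go

dbcc traceoff(3604)

go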

 

37. How do you find an object name from a page number?

dbcc page(dbid, pageno)
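
For example (the dbid and page number below are illustrative, and dbcc page is an undocumented command, so use it with care):

dbcc traceon(3604)

go

dbcc page(5, 12345)

go

/* the output includes the object id; map it to a name with */

select object_name(<objid>, 5)

go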

 

38. What is table partitioning?

It is splitting a large table into smaller pieces of storage, with alter table <table name> partition <n>.
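
For example, to slice a hypothetical orders table into four partitions and back again:

1> alter table orders partition 4

2> go

1> alter table orders unpartition

2> go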

39. What is the housekeeping task?

When ASE is idle, the housekeeper raises a checkpoint that automatically flushes dirty pages from the buffer cache to disk.

 

40. What steps do you take if your server process slows down?

It is an open-ended question; as far as I am concerned:

first I will check the network speed (ping),

then I see the errorlog,

I check the indexes,

I see the transaction log,

I check tempdb,

and I check when update statistics was last run; if it is stale I will update the statistics, followed by sp_recompile.

 

41. How do you check that the Sybase server is running from the UNIX box?

ps -ef | grep "server name", and showserver.

 

42. What are the db_options?

trunc log on chkpt, abort tran on log full, select into/bulkcopy/pllsort, single user, dbo use only, no chkpt on recovery.

 

43. How do you recover the master database?

First I make sure that clean dumps of the important system tables are available: sysdevices, sysdatabases, sysusages, sysalternates, syslogins and sysloginroles.

Then I rebuild the master device using buildmaster,

shut down the server,

restart the server in single-user mode (-m in the RUN server file),

load the dumps of the important system tables,

check the restored system tables,

and restart in normal mode.

 

44. How do you see how a particular query is being run?

set showplan on

 

45. How do you put the master database in single-user mode?

Using the -m flag at server startup.

 

46. How do you reset the sa password?

By adding -psa to the dataserver line in the RUN server file (this generates a new random password for the sa account).

 

47. What is a hotspot?

Multiple transactions inserting into a single table (all contending for its last page).

 

48.  How do you check the current run level in UNIX?

who -r

49. What is defncopy?

It is a utility used to copy the definitions of database objects, either from a database to an operating system file or from an operating system file to a database. Invoke the defncopy program directly from the operating system. defncopy provides a non-interactive way of copying out definitions (create statements) for views, rules, defaults, triggers, or procedures from a database to an operating system file.

 

50. What is bcp?

It is a utility to copy data from a table to a flat file and vice versa.

 

51. What are the modes of bcp?

Fast bcp and slow bcp are the two modes; bcp in works in one of the two.

Slow bcp - logs each row insert that it makes; used for tables that have one or more indexes or triggers.

Fast bcp - logs only page allocations, copying data into tables without indexes or triggers at the fastest speed possible.

To determine the bcp mode that is best for your copying task, consider the:

size of the table into which you are copying data,

amount of data that you are copying in,

number of indexes on the table, and

amount of spare database device space that you have for re-creating indexes.

Fast bcp might enhance performance; however, slow bcp gives you greater data recoverability.
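
A typical pair of commands, with the login, server and file names as placeholders (-c uses character format):

bcp pubs2..authors out authors.dat -c -Usa -Ppassword -SSYBSRV1

bcp pubs2..authors in authors.dat -c -b 100 -Usa -Ppassword -SSYBSRV1

The -b 100 flag commits a batch every 100 rows, which keeps any single bcp transaction small.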

 

52. What are the types of bcp?

bcp in & bcp out

 

 

 

53. What is defrag?

Defrag is dropping and recreating the indexes, so that the gaps in the pages are compacted.

 

54. What is the prerequisite for (fast) bcp?

We need to set the select into/bulkcopy database option.

 

55. What is slow bcp?

In slow bcp there are indexes on the table, and every row insert is logged.

 

56. What is fast bcp?

In fast bcp there are no indexes or triggers on the table, and only page allocations are logged.

 

57. Will triggers fire during bcp?

No, triggers do not fire during bcp.

 

58. What are primary key, foreign key and unique key?

Primary key: a primary key is one which uniquely identifies a row of a table. It does not allow null values and does not allow duplicate values. (A primary key is created on a clustered index by default.)

Foreign key: a foreign key is one which refers to the primary key of another table.

Unique key: a unique key also uniquely identifies a row of a table, with a difference: it will not allow duplicate values, but it will allow null values; any number of them in Oracle, and only a single null value in SQL Server 2000.

Both function in a similar way, with the slight differences above, so declaring the row identifier as a primary key is the best choice. (A unique key is non-clustered by default.)

NOTE: Write a query to find the duplicate rows in a table.

select name, count(*) from tablename group by name having count(*) > 1

 

Give me the global variable names for the descriptions given below:
1. Error number reported for the last SQL statement (@@error)
2. Current transaction mode, chained or unchained (@@tranchained)
3. Status of the previous fetch statement in a cursor (@@sqlstatus)
4. Transaction nesting level (@@trancount)
5. Current process ID (@@spid)

 

What is the difference between a sub-query and a correlated sub-query?
Ans: A subquery is a query that the server must evaluate before it can process the main query; it does not depend on a value in the outer query.
A correlated subquery is one that depends on a value from the outer query, so it is re-evaluated for each candidate row.

 

What command do we use to rename a database?
Ans: sp_renamedb 'oldname', 'newname'
Sometimes sp_renamedb will not work, because if someone is using the db it will not accept the command. In such cases we can first bring the db to single-user mode using sp_dboption, then rename the db, and then rerun sp_dboption to remove single-user mode.
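
A sketch of the full sequence (the database names are placeholders; on older releases a checkpoint in the database may also be needed after sp_dboption):

1> use master

2> go

1> sp_dboption olddb, "single user", true

2> go

1> sp_renamedb olddb, newdb

2> go

1> sp_dboption newdb, "single user", false

2> go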

 

What is a default? Is there a column to which a default can't be bound?
Ans: A default is a value that will be used by a column if no value is supplied for that column while inserting data. IDENTITY columns and timestamp columns can't have defaults bound to them. See CREATE DEFAULT in the books online.

 

What is the difference between a static and a dynamic configuration parameter in Sybase?

In Sybase ASE, when a dynamic configuration parameter is modified, the change takes effect immediately. When a static parameter is modified, the server must be rebooted for the change to take effect.

NOTE: What does the command sp_helpconfig 'number of user connections', 100 return?

It returns the amount of memory that the Sybase ASE server would consume if the parameter were set to that value.
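
For example (the figures returned vary by release and platform, and which parameters are dynamic also varies):

1> sp_helpconfig "number of user connections", 100

2> go

/* reports roughly how much memory 100 user connections would consume */

1> sp_configure "number of locks", 100000

2> go

/* a dynamic parameter takes effect immediately; a static one needs a reboot */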

59. What are candidate keys, alternate keys and composite keys?

Candidate key: a primary key or unique constraint column. A table can have multiple candidate keys.

Alternate key: an alternate key is a candidate key that was not chosen as the primary key.

Composite key: an index key that includes two or more columns; for example, authors(au_lname, au_fname).

Candidate key - a candidate key can be any column or combination of columns that can qualify as a unique key in the database. There can be multiple candidate keys in one table, and each could qualify as the primary key.

Primary key - a primary key is a column or a combination of columns that uniquely identifies a record. Only one candidate key can be the primary key.

Composite key - a primary key that consists of two or more attributes is known as a composite key.

Alternate key - any of the candidate keys that is not part of the primary key is called an alternate key.

 

60. What is the difference between a primary key and a unique key?

Both primary key and unique constraints enforce uniqueness of the columns on which they are defined. But by default, a primary key creates a clustered index on the column, whereas unique creates a nonclustered index by default. Another major difference is that a primary key doesn't allow NULLs, but a unique key allows one NULL.

 

61. How do you trap H/W signals?

With the trap command (a shell built-in).

 

62. What is a natural key?

A natural key is a key for a given table that uniquely identifies the row.

 

63. What are the salient features of 12.5?

 i)  different logical page sizes (2K, 4K, 8K, 16K)

ii)  a data migration utility

iii)  the default database sybsystemdb is added

iv)  compression of dumps by the Backup Server

v)   wider columns

vi)   support for a larger number of rows

vii)  in version 12 we had buildmaster; in 12.5 the master device is built with dataserver

 

64. What are the different statistics commands you use in UNIX?

iostat, netstat, vmstat, mpstat, psrstat

 

65. What do you mean by query optimization?

It is nothing but assigning indexes to a table, and keeping their statistics updated, so that the query optimizer can prepare a good query plan for the table. With this, performance increases.
 

66. What are locks?

A concurrency control mechanism that protects the integrity of data and transaction results in a multi-user environment. Adaptive Server applies page or table locks to prevent two users from attempting to change the same data at the same time, and to prevent processes that are selecting data from reading data that is in the process of being changed.

 

67.   What are levels of lock?

There are three types of locks:

* page locks

* table locks

* demand locks

 

Page Locks

There are three types of page locks:

 

* shared

* exclusive

* update

 

shared

These locks are requested and used by readers of information. More than one connection can hold a shared lock on a data page.

 

This allows for multiple readers.

exclusive

The SQL Server uses exclusive locks when data is to be modified. Only one connection may have an exclusive lock on a given data page. If a table is large enough and the data is spread sufficiently, more than one connection may update different data pages of a given table simultaneously.

update

An update lock is placed during a delete or an update while the SQL Server is hunting for the pages to be altered. While an update lock is in place, there can be shared locks, thus allowing for higher throughput.

 

The update lock(s) are promoted to exclusive locks once the SQL Server is ready to perform the delete/update.

Table Locks

There are three types of table locks:

 

* intent

* shared

* exclusive

 

 

 

intent

Intent locks indicate the intention to acquire a shared or exclusive lock on a data page. Intent locks are used to prevent other transactions from acquiring shared or exclusive locks on the given page.

shared

This is similar to a page level shared lock but it affects the entire table. This lock is typically applied during the creation of a non-clustered index.

exclusive

This is similar to a page level exclusive lock but it affects the entire table. If an update or delete affects the entire table, an exclusive table lock is generated. Also, during the creation of a clustered index an exclusive lock is generated.

Demand Locks

A demand lock prevents further shared locks from being set. The SQL Server sets a demand lock to indicate that a transaction is next in line to lock a table or a page.

 

This avoids indefinite postponement if there is a flurry of readers when a writer wishes to make a change.

 

68. What is a deadlock?

A deadlock occurs when two or more user processes each have a lock on a separate page or table and each wants to acquire a lock on the other process's page or table. The transaction with the least accumulated CPU time is killed and all of its work is rolled back.

 

69.   What is housekeeper?

The housekeeper is a task that becomes active when no other tasks are active.  It writes dirty pages to disk, reclaims lost space, flushes statistics to systabstats and checks license usage.

 

70. What are work tables? What is the limit?

Work tables are created automatically in tempdb by Adaptive Server for merge joins, sorts and other internal processes. The limit is 14: the system will create a maximum of 14 work tables per query.

 

71. What is update statistics?

Updates information about the distribution of key values in specified indexes, or for specified columns, for all columns in an index, or for all columns in a table.

Usage: ASE keeps statistics about the distribution of the key values in each index, and uses these statistics in its decisions about which indexes to use in query processing.

Syntax:

update statistics table_name [[index_name] | [(column_list)]]
[using step values]
[with consumers = consumers]

update index statistics table_name [index_name]
[using step values]
[with consumers = consumers]

The update statistics command helps the server make the best decisions about which indexes to use when it processes a query, by providing information about the distribution of the key values in the indexes. The update statistics commands create statistics if there are no statistics for a particular column, or replace existing statistics if they already exist. The statistics are stored in the system tables systabstats and sysstatistics.

 

 

72. What is sp_recompile?

It causes each stored procedure and trigger that uses the named table to be recompiled the next time it runs.

Usage: the queries used by stored procedures and triggers are optimized only once, when they are compiled. As you add indexes or make other changes to your database that affect its statistics, your compiled stored procedures and triggers may lose efficiency. By recompiling the stored procedures and triggers that act on a table, you can optimize the queries for maximum efficiency.
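
A typical maintenance pair on the pubs2 titles table (titleidind is the index name from pubs2; substitute your own objects):

update statistics titles

go

update index statistics titles titleidind

go

sp_recompile titles

go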

 

73.  What is a difference between a segment and a device?

A device is, well, a device: storage media that holds images of logical pages. A device will have a row in the sysdevices table.

 

A fragment is a part of a device, indicating a range of virtual page  numbers that have been assigned to hold the images of a range of logical page numbers belonging to one particular database. A fragment is represented by a row  in sysusages.

A segment is a label that can be attached to fragments. Objects can be associated with a particular segment (technically, each indid in sysindexes can be associated with a different segment). When future space is needed for the object, it will only be allocated from the free space on fragments that are labeled with that segment.

There can be up to 32 segments in a database, and each fragment can be associated with any, all, or none of them (warnings are raised if no segments are associated). Sysusages has a column called segmap, a bitmap of the associated segments, which maps to the syssegments table.

What is a segment?

A segment is a label that points to one or more database devices. Segment names are used in create table and create index commands to place tables or indexes on specific database devices. Using segments can improve Adaptive Server performance and give the System Administrator or Database Owner increased control over the placement, size, and space usage of database objects.

You create segments within a database to describe the database devices that are allocated to the database. Each Adaptive Server database can contain up to 32 segments, including the system-defined segments. Before assigning segment names, you must initialize the database devices with disk init and then make them available to the database with create database or alter database.

Sybase Transaction Log Management

Contents

* About transaction logs

* Turning off transaction logging

* What information is logged

* Sizing the transaction log

* Separating data and log segments

* Truncating the transaction log

* Managing large transactions

 

 

About Transaction Logs

 

Most SQL Server processing is logged in the transaction log table, syslogs. Each database, including the system databases master, model, sybsystemprocs, and tempdb, has its own transaction log. As modifications to a database are logged, the transaction log continues to grow until it is truncated, either by a dump transaction command or automatically if the trunc log on chkpt option is turned on as described below. This option is not recommended in most production environments where transaction logs are needed for media failure recovery, because it does not save the information contained in the log.

 

The transaction log on SQL Server is a write-ahead log. After a transaction is committed, the log records for that transaction are guaranteed to have been written to disk. Changes to data pages may have been made in data cache but may not yet be reflected on disk.

 

WARNING!

This guarantee cannot be made when UNIX files are used as SYBASE devices.

 

Transaction Logs and commit transaction

 

When you issue a commit transaction, the transaction log pages are immediately written to disk to ensure recoverability of the transaction. The modified data pages in cache might not be written to disk until a checkpoint is issued by a user or SQL Server or periodically as the data cache buffer is needed by other SQL Server users. Note that pages modified in data cache can be written to disk prior to the transaction committing, but not before the corresponding log records have been written to disk. This happens if buffers in data cache containing dirty pages are needed to load in a new page.

 

Transaction Logs and the checkpoint Process

 

If the trunc log on chkpt option is set for a database, SQL Server truncates the transaction log for the database up to the page containing the oldest outstanding transaction when it issues a checkpoint in that database. A transaction is considered outstanding if it has not yet been committed or rolled back. A checkpoint command issued by a user does not cause truncation of the transaction log, even when the trunc log on chkpt option is set. Only implicit checkpoints performed automatically by SQL Server result in this truncation. These automatic checkpoints are performed using the internal SQL Server process called the checkpoint process.

 

The checkpoint process wakes up about every 60 seconds and cycles through every database to determine if it needs to perform a checkpoint. This determination is based on the recovery interval configuration parameter and the number of rows added to the log since the last checkpoint. Only those rows associated with committed transactions are considered in this calculation.

 

If the trunc log on chkpt option is set, the checkpoint process attempts to truncate the log every sixty seconds, regardless of the recovery interval or the number of log records. If nothing will be gained from this truncation, it is not done.

 

Transaction Logs and the recovery interval

 

The recovery interval is a configuration parameter that defines the amount of time for the recovery of a single database. If the activity in the database is such that recovery would take longer than the recovery interval, the SQL Server checkpoint process issues a checkpoint. Because the checkpoint process only examines a particular database every 60 seconds, enough logged activity can occur during this interval that the actual recovery time required exceeds the time specified in the recovery interval parameter.

 

Note that the transaction log of the tempdb database is automatically truncated during every cycle of the checkpoint process, or about every 60 seconds. This occurs whether the trunc log on chkpt option is set on tempdb or not.

 

Turning Off Transaction Logging

 

Transaction logging performed by SQL Server cannot be turned off, to ensure the recoverability of all transactions performed on SQL Server. Any SQL statement or set of statements that modifies data is a transaction and is logged. You can, however, limit the amount of logging performed for some specific operations, such as bulk copying data into a database using bulk copy (bcp) in the fast mode, performing a select/into query, or truncating the log. See the Tools and Connectivity Troubleshooting Guide and the SQL Server Reference Manual for more information on bcp. These minimally logged operations cause the transaction log to get out of sync with the data in a database, which makes the transaction log useless for media recovery.

 

Once a non-logged operation has been performed, the transaction log cannot be dumped to a device, but it can still be truncated. You must do a dump database to create a new point of synchronization between the database and the transaction log to allow the log to be dumped to device.

 

What Information Is Logged

 

When a transaction is committed, SQL Server logs every piece of information relating to the transaction in the transaction log to ensure its recoverability. The amount of data logged for a single transaction depends on the number of indexes affected, the amount of data changed, and the number of pages that must be allocated or deallocated. Certain other page management information may also be logged. For example, when a single row is updated, the following types of records may be placed in the transaction log:

 

* A data delete record, including all the data in the original row.

* A data insert record, including all the data in the modified row.

* One index delete record per index affected by the change.

* One index insert record per index affected by the change.

* One page allocation record per new data/index page required.

* One page deallocation record per data/index page freed.

 

Sizing the Transaction Log

 

There is no hard and fast rule dictating how big a transaction log should be. For new databases, a log size of about 20 percent of the overall database size is a good starting point. The actual size required depends on how the database is being used; for example:

 

* The rate of update, insert, and delete transactions

* The amount of data modified per transaction

* The value of the recovery interval configuration parameter

* Whether or not the transaction log is being saved for media recovery purposes

 

Because there are many factors involved in transaction logging, you usually cannot accurately determine in advance how much log space a particular database requires. The best way to estimate this size is to simulate the production environment as closely as possible in a test. This includes running the applications with the same number of users as will be using the database in production.

 

Separating Data and Log Segments

 

Always store transaction logs on a separate database device and segment from the actual data. If the data and log are on the same segment, you cannot save transaction log dumps. Up-to-date recovery after a media failure is therefore not possible. If the device is mirrored, however, you may be able to recover from a hardware failure. Refer to the System Administration Guide for more information.

 

Also, the data and log segments must be on separate segments so that you can determine the amount of log space used. dbcc checktable on syslogs only reports the amount of log space used and what percentage of the log is full if the log is on its own segment.

 

Finally, because the transaction log is appended each time the database is modified, it is accessed frequently. You can increase performance for logged operations by placing the log and data segments on different physical devices, such as different disks and controllers. This divides the I/O requests for a database between two devices.

 

Truncating the Transaction Log

 

The transaction log must be truncated periodically to prevent it from filling up. You can do this either by enabling the trunc log on chkpt option or by regularly executing the dump transaction command.

 

WARNING!

Up-to-the-minute recoverability is not guaranteed on systems when the trunc log on chkpt option is used. If you use this on production systems and a problem occurs, you will only be able to recover up to your last database dump.

 

Because the trunc log on chkpt option causes the equivalent of the dump transaction with truncate_only command to be executed, it truncates the log without saving it to a device. Use this option only on databases for which transaction log dumps are not being saved to recover from a media failure, usually only development systems.

 

Even if this option is enabled, you might have to execute explicit dump transaction commands to prevent the log from filling during peak loads.

 

If you are in a production environment and using dump transaction to truncate the log, space the commands so that no process ever receives an 1105 (out of log space) error.

 

When you execute a dump transaction, transactions completed prior to the oldest outstanding transaction are truncated from the log, unless they are on the same log page as the last outstanding transaction. All transactions since the earliest outstanding transaction are considered active, even if they have completed, and are not truncated.

 

Figure 1 illustrates active and outstanding transactions:

 

 

[Figure 1: Active transactions and outstanding transactions illustrated; image omitted.]

 

This figure shows that all transactions after an outstanding transaction are considered active. Note that the page numbers do not necessarily increase over time.

 

Because the dump transaction command only truncates the inactive portion of the log, you should not allow stranded transactions to exist for a long time. For example, suppose a user issues a begin transaction command and never commits the transaction. Nothing logged after the begin transaction can be purged out of the log until one of the following occurs:

 

* The user issuing the transaction completes it.

* The user process issuing the command is forcibly stopped, and the transaction is rolled back.

* SQL Server is shut down and restarted.

 

Stranded transactions are usually due to application problems but can also occur as a result of operating system or SQL Server errors. See, “Managing Large Transactions”, below, for more information.

 

Identifying Stranded Transactions with syslogshold

In SQL Server release 11.0 and later, you can query the syslogshold system table to determine the oldest active transaction in each database. syslogshold resides in the master database, and each row in the table represents either:

 

* The oldest active transaction in a database or

* The Replication Server® truncation point for the database’s log.

 

A database may have no rows in syslogshold, a row representing one of the above, or two rows representing both of the above. For information about how Replication Sever truncation points affects the truncation of a database’s transaction log, see your Replication Server documentation.

 

Querying syslogshold can help you when the transaction log becomes too full, even with frequent log dumps. The dump transaction command truncates the log by removing all pages from the beginning of the log up to the page that precedes the page containing an uncommitted transaction record (the oldest active transaction). The longer this active transaction remains uncommitted, the less space is available in the transaction log, since dump transaction cannot truncate additional pages.

 

For information about how to query syslogshold to determine the oldest active transaction that is holding up your transaction dumps, see Backing Up and Restoring User Databases in the System Administration Guide.

 

Managing Large Transactions

Because of the amount of data SQL Server logs, it is important to manage large transactions efficiently. Four common transaction types can result in extensive logging:

 

* Mass updates

* Deleting a table

* Insert based on a subquery

* Bulk copying in

 

The following sections contain explanations of how to use these transactions so that they do not cause extensive logging.

 

Mass Updates

The following SQL statement updates every row in the large_tab table. All of these individual updates are part of the same single transaction.

 

1> update large_tab set col_1 = 0

2> go

 

On a large table, this query results in extensive logging, often filling up the transaction log before completing. In this case, an 1105 error (transaction log full) results. The portion of the transaction that was processed is rolled back, which can also require significant server resources.

 

Another disadvantage of unnecessarily large transactions is the number or type of locks held. An exclusive table lock is normally acquired for a mass update, which prevents all other users from modifying the table during the update. This may cause deadlocks.

 

You can sometimes avoid this situation by breaking up large transactions into several smaller ones and executing a dump transaction between the different parts. For example, the single update statement above could be broken into two or more pieces as follows:

 

1> update large_tab set col1 = 0

2> where col2 < x

3> go

 

1> dump transaction database_name

2> with truncate_only

3> go

 

1> update large_tab set col1 = 0

2> where col2 >= x

3> go

 

1> dump transaction database_name

2> with truncate_only

3> go

 

This example assumes that about half the rows in the table meet the condition col2 < x and the remaining rows meet the condition col2 >= x.

 

If transaction logs are saved for media failure recovery, the log should be dumped to a device and the with truncate_only option should not be used. Once you execute a dump transaction with truncate_only, you must dump the database before you can dump the transaction log to a device.

 

Delete Table

The following SQL statement deletes the contents of the large_tab table within a single transaction and logs the complete before-image of every row in the transaction log:

 

1> delete large_tab

2> go

 

If this transaction fails before completing, SQL Server can roll back the transaction and leave the table as it was before the delete. Usually, however, you do not need to provide for the recovery of such a delete operation: if the operation fails halfway through, you can simply repeat it and the result is the same. Therefore, the logging done by an unqualified delete statement may not always be needed.

 

You can use the truncate table command to accomplish the same thing without the extensive logging:

 

1> truncate table large_tab

2> go

 

This command also deletes the contents of the table, but it logs only space deallocation operations, not the complete before- image of every row.

 

Insert Based on a Subquery

The SQL statement below reads every row in the large_tab table and inserts the value of columns col1 and col2 into new_tab, all within a single transaction:

 

1> insert new_tab select col1, col2 from large_tab

2> go

 

Each insert operation is logged, and the records remain in the transaction log until the entire statement has completed. Also, any locks required to process the inserts remain in place until the transaction is committed or rolled back. This type of operation may fill the transaction log or result in deadlock problems if other queries are attempting to access new_tab. Again, you can often solve the problem by breaking up the statement into several statements that accomplish the same logical task. For example:

 

1> insert new_tab

2> select col1, col2 from large_tab where col1 <= y

3> go

 

1> dump transaction database_name

2> with truncate_only

3> go

 

1> insert new_tab

2> select col1, col2 from large_tab where col1 > y

3> go

 

1> dump transaction database_name

2> with truncate_only

3> go

 

Note

This is just one example of several possible ways to break up a query.

 

This approach assumes that y represents a median value for col1. It also assumes that null values are not allowed in col1. The inserts run significantly faster if a clustered index exists on large_tab.col1, although it is not required.

 

If transaction logs are saved for media failure recovery, the log should be dumped to a device and the with truncate_only option should not be used. Once you execute a dump transaction with truncate_only, you must dump the database before you can dump the transaction log to a device.

 

Bulk Copy

You can break up large transactions when using bcp to bulk copy data into a database. If you use bcp without specifying a batch size, the entire operation is performed as a single logical transaction. Even if another user process does a dump transaction command, the log records associated with the bulk copy operation remain in the log until the entire operation completes and another dump transaction command is performed. This is one of the most common causes of the 1105 error. You can avoid it by breaking up the bulk copy operation into batches. Use this procedure to ensure recoverability:

 

1. Turn on the trunc log on chkpt option:

 

1> use master

2> go

 

1> sp_dboption database_name,

2> trunc, true

3> go

 

1> use database_name

2> go

 

1> checkpoint

2> go

 

Note

“trunc” is an abbreviated version of the option trunc log on chkpt.

 

2. Specify the batch size on the bcp command line. This example copies rows into the pubs2..authors table in batches of 100:

UNIX:    bcp -b 100

3. Turn off the trunc log on chkpt option when the bcp operations are complete, and dump the database.

 

In this example, a batch size of 100 rows is specified, resulting in one transaction per 100 rows copied. You may also need to break the bcp input file into two or more separate files and execute a dump transaction between the copying of each file to prevent the transaction log from filling up.

 

If the bcp in operation is performed in the fast mode (with no indexes or triggers), the operation is not logged. In other words, only the space allocations are logged, not the complete table. The transaction log cannot be dumped to a device in this case until after a database dump is performed (for recoverability).

 

If your log is too small to accommodate the amount of data being copied in, you may want to do batching and have the sp_dboption trunc log on checkpoint set. This will truncate the log after each checkpoint.

 

Sybase Tempdb space management and addressing tempdb log full issues

 

 

A default installation of Sybase ASE has a small tempdb located on the master device. Almost all ASE implementations need a much larger temporary database to handle sorts and worktables, and therefore DBAs need to increase tempdb. This document gives some recommendations on how this can be done and describes various techniques to guarantee maximum availability of tempdb.

Contents

 

 

* 1 About Segments

* 2 Prevention of a full logsegment

* 3 Default or system segments are full

* 4 Prevention of a full segment for data

* 5 Separation of data and log segments

* 6 Using the dsync option

* 7 Moving tempdb off the master device

* 8 Summary of the recommendations

 

 

About Segments

 

Tempdb is basically just another database within the server and has three segments (What’s a segment): ‘system’ for system tables like sysobjects and syscolumns, ‘default’ to store objects such as tables and ‘logsegment’ for the transaction log (syslogs table). This type of segmentation, no matter the size of the database, has an undefined space for the transaction log; the only limitation is the available size within the database. The following script illustrates that this can lead to nasty problems.

 

create table #a(a char(100) not null)

go

declare @a int

 

select @a = 1

 

while @a > 0

begin

insert into #a values("get full")

end

go

 

Running the script populates table #a and the transaction log at the same time, until tempdb is full. Then the log gets automatically truncated by ASE, allowing for more rows to be inserted in the table until tempdb is full again. This cycle repeats itself a number of times until tempdb is filled up to the point that even the transaction log cannot be truncated anymore. At that point the ASE errorlog will show messages like 1 task(s) are sleeping waiting for space to become available in the log segment for database tempdb. When you log on to ASE to resolve this problem and you run an sp_who, you will get Failed to allocate disk space for a work table in database ‘tempdb’. You may be able to free up space by using the DUMP TRANsaction command, or you may want to extend the size of the database by using the ALTER DATABASE command.

 

Your first task is to kill off the process that is causing the problem, but how can you know which process to kill if you can’t even run sp_who? This problem can be solved with the lct_admin function. In the format lct_admin("abort", 0, <dbid>) you can kill sessions that are waiting on a log suspend. So you do a:

select lct_admin("abort", 0, 2)   /* 2 is the dbid of tempdb */

 

When you execute the lct_admin function, the session is killed but tempdb is still full. In fact it is so full that the table #a cannot be dropped, because this action must also be logged in the transaction log of tempdb. Besides a reboot of the server, you would have no other option than to increase tempdb (alter database) with a bit more space for the logsegment.

 

alter database tempdb log on <device> = <size>

 

This extends tempdb and makes it possible to drop table #a and to truncate the transaction log. In a real-life situation this scenario could cause significant problems for users.

 

Prevention of a full logsegment

 

One of the database options that can be set with the sp_dboption stored procedure can be used to prevent this. When you do:

 

sp_dboption tempdb, "abort tran on log full", true

 

(for pre 12.5.1: followed by a checkpoint in tempdb) the transaction that fills up the transaction log in tempdb is automatically aborted by the server.


Default or system segments are full

 

The default or system segments in tempdb, where the actual data is stored, can also get full, just like in any ordinary database. Your query is cancelled with a Msg 1105: Can't allocate space for object '#a_____00000180017895422' in database 'tempdb' because 'default' segment is full/has no free extents. If you ran out of space in syslogs, dump the transaction log. Otherwise, use ALTER DATABASE or sp_extendsegment to increase the size of the segment. This message can be caused by a query that creates a large table in tempdb, or by an internal worktable created by ASE for sorts, etc. Potentially, this problem is much worse than a full transaction log, since the transaction is cancelled: a full log segment leads to “sleeping” processes until the problem is resolved, whereas a full data segment leads to aborted transactions.

 

 

 

Prevention of a full segment for data

 

The Resource Governor in ASE allows you to deal with these circumstances. You can specify just how much space a session is allowed to consume within tempdb. When the space usage exceeds the specified limit, the session is given a warning or is killed. Before using this feature you must configure ASE (with sp_configure) to use the Resource Governor:

 

sp_configure "allow resource limits", 1

 

After a reboot of the server (this parameter is static, in 12.5.1 too) you can add limits (sp_add_resource_limit):

 

sp_add_resource_limit "petersap", null, "at all times", "tempdb_space", 200

 

This limit means that the user petersap is allowed to use 200 pages within tempdb. When the limit is exceeded, the session receives an error message (Msg 11056) and the query is aborted. Different options for sp_add_resource_limit make it possible to kill the session when the limit is exceeded. Just how many pages a user should be allowed to use in tempdb depends on your environment; things like the size of tempdb, the number of concurrent users and the type of queries should be taken into account when setting the resource limit. When a resource limit for tempdb is crossed, it is logged in the Sybase errorlog. This makes it possible to trace how often a limit is exceeded and by whom, so that the resource limit can be tuned. When you use multiple temporary databases, the limit is enforced on all of them.

 

Separation of data and log segments

 

For performance reasons it makes sense to separate the system+default segments and the logsegment from each other. Not all sites follow this policy; it is a tradeoff between the flexibility of having data and log combined and some increased performance. Since tempdb is a heavily used database, it is not a bad idea to invest some time in investigating its space requirements. The following example illustrates how tempdb could be configured with separate devices for the logsegment and the data. The example is based on an initial placement of tempdb on the master device. First we increase tempdb for the system and data segments:

 

alter database tempdb on <device> = <size>

 

Then we extend tempdb for the transaction log:

 

alter database tempdb log on <device> = <size>

 

When you have done this and run “sp_helpdb tempdb”, you will see that data and log are still on the same segment. Submit the following to resolve this (sp_logdevice):

 

sp_logdevice tempdb, <log device>

 

Please note that tempdb should not be increased on the master device.

 

Using the dsync option

 

The dsync option for devices allows you to enable or disable I/O buffering to file systems. The option is not available for raw partitions and NT files. To get the maximum possible performance for tempdb, use dedicated device files created with the Sybase disk init command. The files should be placed on a file system, not on raw partitions. Set the dsync option to false, as in the following example (disk init):

 

disk init name = "tempdb_data",

size = "500M",

physname = "/var/sybase/tempdb_data.dat",

dsync = false

 

 

Moving tempdb off the master device

 

When you have increased tempdb on separate devices you can configure tempdb so that the master device is unused. This increases the performance of tempdb even further. There are various techniques for this, all with their pros and cons but I recommend the following. Modify sysusages so that segmap will be set to 0 for the master device. In other words, change the segments of tempdb so that the master device is unused. This can be done with the following statements:

 

sp_configure "allow updates to system tables", 1

go

update master..sysusages

set segmap = 0

where dbid = 2

and lstart = 0

go

sp_configure "allow updates to system tables", 0

go

shutdown -- reboot now!

go

 

When you use this configuration you should know the recovery procedure, just in case one of the devices of tempdb gets corrupted or lost. Start your ASE in single-user mode by adding the -m switch to the dataserver options, then submit the following statements:

 

update master..sysusages
set segmap = 7
where dbid = 2
and lstart = 0
go
delete master..sysusages
where dbid = 2
and lstart > 0
go
shutdown  -- reboot now!
go

 

Remove the -m switch from the dataserver options and restart ASE. Your tempdb is now available with the default allocation on the master device.

 

Summary of the recommendations

 

* Increase tempdb from its initial size to a workable value

* Set the option “abort tran on log full” for tempdb to on

* Create resource limits

* Place data and log segments on separate devices

* Place tempdb on a file system device with dsync set to false

* Move tempdb off the master device by modifying the segmap attribute

 

74. Do we have to create the sp_thresholdaction procedure on every segment, every database, or somewhere else?

You don't *have* to create threshold action procedures for any segment, but you *can* define thresholds on any segment. The log segment has a default "last chance" threshold set up that will call a procedure called "sp_thresholdaction". It is a good idea to define sp_thresholdaction, but you don't have to; if you don't, you will just get a "proc not found" error when the log fills up and will have to take care of it manually.

Thresholds are created only on segments, not on devices or databases. You can create the procedures in sybsystemprocs with a name starting with "sp_" to have multiple databases share the same procedure, but often each database has its own requirements, so they are created locally instead.
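A minimal sketch of a last-chance threshold procedure (the dump device log_dump_dev is hypothetical; creating the procedure in sybsystemprocs lets every database share it):

create procedure sp_thresholdaction
    @dbname      varchar(30),
    @segmentname varchar(30),
    @space_left  int,
    @status      int
as
begin
    -- dump the transaction log when the last-chance threshold fires
    if @segmentname = "logsegment"
        dump transaction @dbname to log_dump_dev
    print "threshold fired on %1!.%2!", @dbname, @segmentname
end
go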

Determining Free Log Space in Sybase ASE


 

Use dbcc checktable(syslogs) for an accurate check of free space in Sybase Adaptive Server Enterprise.

 

(A bit old, though.)

 


 

When you need to check free space in a transaction log, the stored procedure sp_helpdb is the usual tool. While sp_helpdb is useful for a general estimate of free space, for a precise figure use one of the following methods:

 

* dbcc checktable (syslogs)

* Determine the number of data pages in the transaction log via an isql script, for example:

select data_pgs(8, doampg)
from sysindexes where id = 8
go

 

Each method has advantages.

 

Sybase recommends sp_helpdb for most situations because it reports quickly. sp_helpdb uses the unreserved page count in sysusages. However, the unreserved page count is updated only intermittently and therefore may not accurately reflect the actual state of the database. Thus, even when sp_helpdb reports free space, an insert may still run out of space, resulting in error message 1105, which reads in part:

 

Can't allocate space for object ... because log segment full

 

If this error occurs, follow the instructions in Runtime 1105 Errors: State 3 in the “Error Message Writeups” chapter of the Adaptive Server Enterprise Troubleshooting and Error Messages Guide.

 

The dbcc checktable (syslogs) command checks for possible corruption as well as reporting the size of the log. However, it can take a long time to run, depending on the size of the log. For more information about dbcc checktable, see the chapter "Checking Database Consistency" in the Adaptive Server Enterprise System Administration Guide.

 

The isql script is more accurate than sp_helpdb. It is described in the Error 1105 section in “Error Message Writeups” chapter of the Adaptive Server Enterprise Troubleshooting and Error Messages Guide.
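As a sketch of the arithmetic behind the isql method (the database name mydb is hypothetical; segmap = 4 marks log-only fragments, and syslogs always has object id 8):

-- total pages allocated to the log segment
select sum(size) from master..sysusages
where dbid = db_id("mydb") and segmap = 4
go
-- pages actually used by the log
select data_pgs(8, doampg) from mydb..sysindexes where id = 8
go
-- free log pages = allocated log pages - used log pages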

75. When to run a reorg command?

reorg is useful when:

 

• A large number of forwarded rows causes extra I/O during read operations.

• Inserts and serializable reads are slow because they encounter pages with noncontiguous free space that needs to be reclaimed.

• Large I/O operations are slow because of low cluster ratios for data and index pages.

• sp_chgattribute was used to change a space management setting (reservepagegap, fillfactor, or exp_row_size) and the change is to be applied to all existing rows and pages in a table, not just to future updates.

 

76. What are the most important DBA tasks?

In my opinion, these are (in order of importance): (i) ensure a proper database / log dump schedule for all databases (including master); (ii) run dbcc checkstorage on all databases regularly (at least weekly), and follow up any corruption problems found; (iii) run update [index] statistics at least weekly on all user tables; (iv) monitor the server errorlog for messages indicating problems (daily). Of course, a DBA has many other things to do as well, such as supporting users and developers, monitoring performance, and so on.

 

77. What is bit datatype and what’s the information that can be stored inside a bit column?

The bit datatype is used to store Boolean information, 1 or 0 (true or false). Up to SQL Server 6.5 a bit column could hold only a 1 or 0, with no support for NULL. From SQL Server 7.0 onwards, the bit datatype can represent a third state, NULL.

 

78. What are different types of triggers?

A trigger is a stored procedure that fires automatically when a data modification event occurs on a table: an insert, delete, or update. Accordingly, there are 3 types of triggers available with Sybase.

 

How triggers work in Sybase

Triggers: Enforcing Referential Integrity

 

How triggers work

 

Triggers are automatic. They work no matter what caused the data modification—a clerk’s data entry or an application action. A trigger is specific to one or more of the data modification operations (update, insert, and delete), and is executed once for each SQL statement.

 

For example, to prevent users from removing any publishing companies from the publishers table, you could use this trigger:

 

create trigger del_pub
on publishers
for delete
as
begin
rollback transaction
print "You cannot delete any publishers!"
end

 

The next time someone tries to remove a row from the publishers table, the del_pub trigger cancels the deletion, rolls back the transaction, and prints a message.

 

A trigger “fires” only after the data modification statement has completed and Adaptive Server has checked for any datatype, rule, or integrity constraint violation. The trigger and the statement that fires it are treated as a single transaction that can be rolled back from within the trigger. If Adaptive Server detects a severe error, the entire transaction is rolled back.

 

Triggers are most useful in these situations:

 

* Triggers can cascade changes through related tables in the database. For example, a delete trigger on the title_id column of the titles table can delete matching rows in other tables, using the title_id column as a unique key to locate rows in titleauthor and roysched.

* Triggers can disallow, or roll back, changes that would violate referential integrity, canceling the attempted data modification transaction. Such a trigger might go into effect when you try to insert a foreign key that does not match its primary key. For example, you could create an insert trigger on titleauthor that rolled back an insert if the new titleauthor.title_id value did not have a matching value in titles.title_id.

* Triggers can enforce restrictions that are much more complex than those that are defined with rules. Unlike rules, triggers can reference columns or database objects. For example, a trigger can roll back updates that attempt to increase a book's price by more than 1 percent of the advance.

* Triggers can perform simple "what if" analyses. For example, a trigger can compare the state of a table before and after a data modification and take action based on that comparison.

Triggers in Sybase

A trigger is a special type of stored procedure that executes automatically when a DML operation takes place on a table.

 

* Triggers are used to enforce referential integrity.

* Triggers are used to cascade changes to related tables.

* Triggers can apply restrictions more complex than those enforced using rules.

* Triggers can compare the state of a table before and after a change and act on the difference.

 

Triggers cannot have the following:

 

1. create and drop commands
2. alter table, alter database, truncate table
3. load database and load transaction
4. grant and revoke statements
5. update statistics
6. reconfigure
7. disk init, disk mirror, disk refit, disk reinit, disk remirror, disk unmirror
8. select into

 

How to Create Trigger in Sybase

 

create trigger emp_trigger

on emp

for insert, update, delete

as

 

Trigger Example

 

create trigger emp_trigger

on emp

for delete

as

delete payment

from payment, deleted

where payment.empid = deleted.empid

79. How many triggers will be fired if more than one row is inserted?

A trigger fires once per SQL statement, not once per row; a single insert statement that adds many rows still fires the trigger only once, and the inserted table inside the trigger contains all of the new rows.
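A sketch illustrating this (the emp table and its salary column are hypothetical): even a multi-row insert fires this trigger once, and the inserted table holds every new row:

create trigger emp_ins_trigger
on emp
for insert
as
begin
    -- 'inserted' contains all rows added by the firing statement
    if exists (select 1 from inserted where salary < 0)
    begin
        rollback transaction
        print "Negative salaries are not allowed!"
    end
end
go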

 

80. What is the advantage of using triggers?

To maintain referential integrity, and to centralize logic that must run on every data modification, no matter which application performs it.

 

81. How do you optimize a stored procedure?

By creating appropriate indexes on the tables it touches, and by writing its queries so that the optimizer can pick up those indexes (SARGable where clauses).

 

82. How do you optimize a select statement?

Use SARGs in the where clause, and check the query plan using set showplan on. If the optimizer is not choosing the proper index, you may have to force the correct index to make the query run faster.
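A sketch of forcing an index in ASE (the orders table and the orders_cust_idx index are hypothetical):

select * from orders (index orders_cust_idx)
where cust_id = 42
go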

 

83. How do you force a transaction to fail?

By killing a process you can force a transaction to fail.

 

84. What are constraints? Explain the different types of constraints.

Constraints enable the RDBMS to enforce the integrity of the database automatically, without needing you to create triggers, rules, or defaults.

Types of constraints: NOT NULL, CHECK, UNIQUE, PRIMARY KEY, FOREIGN KEY

 

85. What are the steps you will take to improve the performance of a poorly performing query?

This is a very open-ended question and there could be a lot of reasons behind the poor performance of a query. Some general issues you could talk about: no indexes, table scans, missing or out-of-date statistics, blocking, excess recompilations of stored procedures, procedures and triggers without SET NOCOUNT ON, poorly written queries with unnecessarily complicated joins, too much normalization, and excess usage of cursors and temporary tables.

 

One of the tools that helps you troubleshoot performance problems: SET SHOWPLAN ON.

 

86. What would you do when the ASE server's performance is bad?

"Bad performance" is not a very meaningful term, so you'll need to get a more objective diagnosis first. Find out (i) what the complaint is based on (clearly increasing response time, or just a "feeling" that it's slower?), (ii) for which applications / queries / users this seems to be happening, and (iii) whether it happens continuously or just incidentally. Without identifying a specific, reproducible problem, any action is no better than speculation.

 

87. What do you do when a segment gets full?

Wrong: a segment can never get full (even though some error messages state something to that extent). A segment is a "label" for one or more database device fragments; the fragments to which that label has been mapped can get full, but the segments themselves cannot. (Well, OK, this is a bit of a trick question... when those device fragments fill up, you either add more space or clean up old / redundant data.)

 

88. Is it a good idea to use datarows locking for all tables by default?

Not by default. Only if you're having concurrency (locking) problems on a table, and you're not locking many rows of the table in a single transaction, should you consider datarows locking for that table. In all other cases, use either datapages or allpages locking.

(Some DBAs prefer datapages locking as the default lock scheme for all tables, because switching from datapages to datarows locking is fast and easy, whereas converting from allpages locking rebuilds the entire table, which may take long for large tables. Datapages locking also has other advantages over allpages, such as not locking index pages, update statistics running at level 0, and the availability of the reorg command.)
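Switching an existing table's lock scheme is a one-liner (the table name is hypothetical):

alter table orders lock datarows
go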

 

89. Is there any advantage in using the 64-bit version of ASE instead of the 32-bit version?

The only difference is that the 64-bit version of ASE can handle a larger data cache than the 32-bit version, so you'd save on physical I/O. Therefore, this may be an advantage if the amount of data cache is currently a bottleneck. There's no point in using 64-bit ASE with the same amount of "total memory" as the 32-bit version, because 64-bit ASE comes with additional overhead in memory usage, so the net amount of data cache would actually be less for 64-bit than for 32-bit in this case.

 

90. What is the difference between managing permissions through users and groups and through user-defined roles?

The main difference is that user-defined roles (introduced in ASE 11.5) are server-wide and are granted to logins. Users and groups (the classic method that has been there since the first version of Sybase) are limited to a single database. Permissions can be granted / revoked to both user-defined roles and users / groups. Whichever method you choose, don't mix them, as the precedence rules are complicated.

 

91. How do you BCP only a certain set of rows out of a large table?

If you're on ASE 11.5 or later, create a view for those rows and BCP out from the view. In earlier ASE versions, you'll have to select those rows into a separate table first and BCP out from that table. In both cases, the speed of copying the data depends on whether there is a suitable index for retrieving the rows.
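A sketch (the database, view, and column names are hypothetical; bcp prompts for the password):

1> create view recent_orders as
2>    select * from orders where order_date >= "20110101"
3> go

bcp mydb..recent_orders out recent_orders.dat -c -Usa -Smyserver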

 

92. What are the main advantages and disadvantages of using identity columns?

The main advantage of an identity column is that it can generate unique, sequential numbers very efficiently, requiring only a minimal amount of I/O. The disadvantages are that the generated values themselves are not transactional, and that the identity values may jump enormously when the server is shut down the rough way (resulting in "identity gaps"). You should therefore only use identity columns in applications that have addressed these issues.

 

93. Is there any disadvantage in splitting up your application data into a number of different databases?

When there are relations between tables / objects across the different databases, then there is a disadvantage indeed: if you restore a dump of only one of those databases, the relations may no longer be consistent. This means that the consistent set of databases should always be the unit of backup / restore. Therefore, when making this kind of design decision, backup / restore issues should be considered (and the DBA should be consulted).

 

94. How do you tell the date/time the server was started?

select "Server Start Time" = crdate from master..sysdatabases where name = "tempdb"

or: select * from sysengines

 

95. How do you move tempdb off of the master device?

This is the Sybase TS method of removing most activity from the master device.

Alter tempdb onto another device:

alter database tempdb on <device_name> = <size>
go

Then drop the segments from the master device:

sp_dropsegment "default", tempdb, master
go
sp_dropsegment "logsegment", tempdb, master
go
sp_dropsegment "system", tempdb, master
go

 

 

 

96. We have lost the sa password; what can we do?

Most people use the 'sa' account all of the time, which is fine if there is only ever one DBA administering the system. If you have more than one person accessing the server using the 'sa' account, consider using sa_role enabled accounts and disabling the 'sa' account. Funnily enough, this is obviously what Sybase think, because it is one of the questions in the certification exams.

 

If you see that someone is logged in using the 'sa' account, or is using an account with 'sa_role' enabled, then you can do the following:

 

sp_configure "allow updates to system tables", 1
go

update syslogins set password = null where name = "sa"
go

sp_password null, newPassword
go

 

97. What are the 4 isolation levels, and which is the default?

• Level 0 – read uncommitted / dirty reads
• Level 1 – read committed (the default)
• Level 2 – repeatable read
• Level 3 – serializable

 

 

98. Describe the differences between chained mode and unchained mode.

Chained mode is ANSI-89 compliant, whereas unchained mode is not.

In chained mode the server executes an implicit begin tran before the first data modification or retrieval statement, whereas in unchained mode an explicit begin tran is required.
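A sketch of the difference (the orders table is hypothetical):

-- chained mode: the first DML statement implicitly begins a transaction
set chained on
go
update orders set status = "X" where id = 1
commit    -- required; nothing is committed until you say so
go
-- unchained mode (the default): each statement commits by itself
-- unless you wrap it in an explicit begin tran / commit
set chained off
go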

 

99. dump transaction with standby_access is used to?

Provide a transaction log dump containing no active transactions, so it can be loaded onto a standby server that is then brought online for read-only use (online database ... for standby_access).

 

100.  Which optimizer statistics are maintained dynamically?

Page counts and row counts.

 

Morgan Stanley, Telephonic round Sybase Interview Questions

Guys, I have collected some Sybase interview questions from folks who attended the Morgan Stanley, Mumbai interview recently. Please try to post correct answers so that everyone benefits.

 

1. Explain performance tuning issues you recently worked on?

2. How to check the query plan, and how to get the query plan without executing the query?

3. Difference between clustered and non-clustered indexes and when to create them. How many clustered / non-clustered indexes can be created on a given table?

4. Types of locks in Sybase. Is shared-on-shared, shared-on-exclusive, or exclusive-on-exclusive locking possible?

5. How to identify which process created a deadlock situation?

6. What is the default isolation level in Sybase and what is the purpose of using isolation levels?

7. Which performs better, a join or a subquery, from a memory perspective?

8. Discussion on a query having a "not in" clause w.r.t. performance tuning?

9. What performs better, "not in" or "not exists"?

10. How many parameters can a stored procedure return?

11. Can you BCP out a table having 10 million rows or more?

12. Difference between truncate and delete?

13. What is the purpose of the "with check option" in views?

14. If the table doesn't have an index, will Sybase allow you to create an updatable view on it?

15. Multi-table views: how do updates work?

 

2. Set noexec on and set showplan on

 

5. Enable the "print deadlock information" configuration parameter to write deadlock details to the Sybase errorlog, but note that this can degrade Sybase performance.

 

6. The default isolation level is 1. Isolation levels specify the kinds of interactions that are not permitted while concurrent transactions are executing; that is, whether transactions are isolated from each other, or whether they can read or update information in use by another transaction. Sybase supports 4 isolation levels: level 0 (read uncommitted), level 1 (read committed), level 2 (repeatable read) and level 3 (serializable read).

 

11. The maximum file size limit may be exceeded if 10 million or more rows are bcp'd out. To avoid that, we can use the -F (first row) and -L (last row) options of the bcp utility to split the output across multiple files.
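A sketch of the split (the database, table, and server names are hypothetical):

bcp mydb..bigtab out bigtab_part1.dat -c -F 1 -L 5000000 -Usa -Smyserver
bcp mydb..bigtab out bigtab_part2.dat -c -F 5000001 -L 10000000 -Usa -Smyserver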

 

13. The with check option restricts inserts and updates through the view to rows that satisfy the view's where clause.

 

create view vw_ca_authors as
select au_id, au_lname, au_fname, phone, state
from authors
where state = "CA"
with check option

-- can only insert or update rows with state = "CA", so this insert fails
insert vw_ca_authors values ("111-222-3333", "Smith", "John", "453-2343", "NY")

 

 

15. Only one base table of the view can be updated at a time, and the view must not have a "with check option" clause.

7. If memory is ample then joins are preferable. A join performs better than a subquery because subqueries involve the creation of intermediate tables and more I/O.

RBS Sybase Interview questions

1) You can find duplicate values using the query below:

select column from table group by column having count(column) > 1

2) We can use sp_lock and sp_familylock to see the locks currently held in the database.

3) select * from sysprocesses (here we can see the CPU utilization, engine number and blocked processes), or sp_who

4) If you want to improve the performance of this query, create an appropriate index on it.

5) We can analyze the table using the query plan of that table (sp_showplan).

6) Need to check.

7) Truncate: removes the data but keeps the table definition. Drop: removes the data and the table definition as well.

Hope this helps. Please ignore if I am wrong.

 

————————————————————————-

SYBASE INTERVIEW QUESTIONS – Accenture

————————————————————————-

 

1. What databases are created in Sybase by default when installed?

   – master, tempdb, model

 

model -> a template that provides the attributes and initial contents of newly created databases. Need an example?

 

2. What types of temp tables are created?

   – #abc, tempdb..abc

– a # table's lifetime is the stored procedure in which it was created, or the session;

when the session is closed, all # tables are dropped.
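A sketch of both kinds:

create table #scratch (id int)          -- visible to this session/proc only; dropped automatically
create table tempdb..scratch (id int)   -- visible to everyone; persists until dropped or ASE restarts
go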

 

3. When will tempdb..xxx tables be dropped?

   – When the sybase server is bounced.

 

4. What happens exactly when the sybase server is bounced? How are the tempdb.. tables dropped?

– The tempdb database is rebuilt from the model database at startup, so all tempdb..xxx tables disappear.

 

5. Have you used ‘user defined datatypes’? What are they?

– sp_addtype tid, "char(6)", "not null"

 

 

6. What are the advantages of sp?

– Fast -> why?

a. Less network traffic.

b. Faster execution, because it is already compiled.

c. The query plan can exist in the procedure cache.

           

 

7. When will the query plan of a sp be created?

– At compile time, or when it is executed for the first time.

 

8. What exactly happens when the sp is created?

   – ???

 

9. Where is the sp stored when created?

– syscomments -> what else is stored in syscomments? (The source text of views, rules, defaults, and triggers, besides procedures.)

 

Performance Tuning:

——————–

10. How will you start performance tuning?

 

 

11. How will Sybase decide which index to use?

– Based on the statistics, stored in sysstatistics.

 

12. What are 'deferred update' and 'direct update'?

– A deferred update is used when the table being updated is also joined in the update statement; to avoid joining the updated data again and looping forever, Sybase defers the update until all qualifying rows have been scanned, storing the intermediate result in a work table / the log and applying it afterwards.

– A direct update is applied to the data pages in place, in real time.

 

13. How to get query plan? How to get the query plan if I don’t want to execute the query?

    – SET SHOWPLAN ON

    – SET NOEXEC ON

    – SET FMTONLY ON

 

14. How is error handling done in a stored procedure?

– The @@error global variable is non-zero when the just-executed SQL statement raised an error.

 

15. How will you pass the error message from a stored procedure to the application program?

– Possibly via the DB interface; raiserror passes an error number and message back to the client's error handler.

 

16. What are different modes of transaction?

    – Chained mode and unchained mode.

 

The default is unchained; use "set chained on" to change the transaction mode.

           

     http://manuals.sybase.com/onlinebooks/group-as/asg1250e/sqlug/@Generic__BookTextView/53713;pt=52735/*

 

 

17. How does Sybase internally manage a transaction?

– @@trancount, transaction log

 

18. In a nested transaction, if you issue a rollback at the end, all transactions are rolled back. How does Sybase do this?

    – ???

 

19. What are the different locking schemes in Sybase?

Allpages locking, which locks data pages and index pages

Datapages locking, which locks only the data pages

Datarows locking, which locks only the data rows

http://infocenter.sybase.com/help/index.jsp?topic=/com.sybase.dc20021_1251/html/locking/X25549.htm

 

20. How do you define which lock scheme to apply when defining a table?

– CREATE TABLE abc (c1 int, c2 int) lock <datarows | datapages | allpages>

create table table_name (column_name_list)
[lock {datarows | datapages | allpages}]

 

 

21. What is the difference between Row level, Page level, Table level locks? Which is preferred?

    – ???

 

22. What is the default locking scheme in Sybase? Why did Sybase decide to use this?

– Allpages locking is the default locking scheme.

 

23. How does an update lock work?

– An update lock is taken on pages while scanning for the qualifying rows; it lets other readers keep shared locks but blocks other update or exclusive locks, and it is promoted to an exclusive lock once the actual row is changed.

 

24. Which lock should be used? Which is faster (or something like that he asked)?

– For selections or updates over a wide data range, page-level locking; otherwise, row-level locking.

 

24. What are the different types of joins?

    – Simple join, self join, outer join

 

25. If a monitoring tool is not installed, how will you identify the slow SQL in an application?

– Which system tables can help us? (The MDA monitoring tables, e.g. monProcessSQLText, are one option.)

 

26. How do you quickly provide a solution for a performance issue?

– An abstract query plan (AQP)?

 

27. How will you apply AQP to a query within a stored procedure?

    – ??

 

28. What are the tools available in Sybase for performance tuning?

    – Force plan, index covering, …?

 

29. What are indexes and their types? What's the difference between clustered and non-clustered indexes?

http://www.sybaseteam.com/showthread.php?tid=405

 

30. What are the disadvantages of a clustered index?

– The table needs to be kept sorted, which is costly if rows are inserted/amended/deleted often.

 

31. In a stored procedure, what is the use of the with recompile option?

– Every time the sp is executed, a new query plan is created. Used when the data in the tables referenced by the sp changes drastically/dynamically.

 

32. What is sp_recompile? When will you use this?

– Causes each stored procedure and trigger that uses the named table to be recompiled the next time it runs:

sp_recompile objname

 

 

33. What are the advantages of views?

– abstraction; not all data of the underlying table needs to be shown to the user.

– He was asking about one basic advantage you missed: what is that?

http://sqlserverpedia.com/wiki/Views_-_Advantages_and_Disadvantages

 

34. What is the with check option on views?

– It ensures that every insert or update made through the view satisfies the view's where clause, so the changed rows remain visible through the view.

 

35. When a new column is added to a table and there is a view on that table, say 'select * from table', will the view include the new column when you execute it?

– No, because the 'select *' was internally expanded to the individual columns when the view was created, so the view will not know about the new column.

 

36. When a user manually updates a column, say flag, in the table (there may be many other columns), it should be validated. How will you do this?

– With a trigger:

create trigger flag_trig
on emp
for update
as
begin
    -- join deleted and inserted on the key so each row is compared with itself
    if exists (select 1 from deleted D, inserted I
               where D.empid = I.empid and D.flag != I.flag)
    begin
        /* SQL statements ... some validation action */
    end
end

 

 

37. How does an update trigger work?

– Via the magic tables deleted and inserted.

 

38. What are the different BCP types? What are the options available?

– Fast bcp (drop triggers and indexes on the table, then bcp in) and slow bcp.

 

39. What is the batch option in BCP? When the -b option is not given and you bcp in 4 million records, what happens?

– The whole load becomes a single transaction, so the transaction log blows up (bcp in is a logged operation); the long-open transaction also creates problems.

 

40. What happens exactly when a BCP with batch option is done?

    – ???

 

41. What is the use of an identity column? Can we give our own value? Do you know about identity gaps?

– Sequential numbering generated by Sybase; yes, we can give our own value; identity gaps appear when values in the sequence are skipped, e.g. rows deleted in between or a rough shutdown.

 

42. UNION and UNION ALL? What is the difference and which is faster?

– union – returns distinct rows.

union all – keeps duplicates; this is faster (no duplicate-elimination step).

 

43. What is a correlated subquery? What happens exactly in a correlated subquery?

– A subquery that references columns of the outer query, so it is logically re-evaluated once for each row of the outer query.

 

44. In the isql utility, what are the -o and -i options?

– -o writes the query output to a file.

– -i executes a set of SQL statements stored in a file.

 

45. How can I ignore duplicates while loading data through BCP?

– Create a unique index with the ignore_dup_key option.
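A sketch (the table and column names are hypothetical):

create unique index cust_idx on customers(cust_id) with ignore_dup_key
go
-- during bcp in, rows that would duplicate cust_id are discarded
-- instead of aborting the batch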

 

46. What is the difference between a 'User' and a 'Login' in Sybase?

– A login authenticates you to the server; a user is your identity within a particular database.

 

47. What are the system tables you have seen so far?

– sysobjects, syscolumns, sysindexes, syscomments, sysqueryplans, sysdevices

 

48. How do you see all the processes in Sybase?

    – sysprocesses

 

——

UNIX

——

1. What is AWK and why do we need it?

    – A pattern-scanning and text-processing language, handy for column-oriented work on text files.

2. How to find a string in a file?

    – grep

3. What is SED?

    – A stream editor for filtering and transforming text.

4. How do you see all the processes in UNIX?

    – ps

 

 

What happens when an SQL statement is submitted to ASE?

 

SQL statement is parsed by the language processor

SQL statement is normalized and optimized

SQL is executed

Task is put to sleep pending lock acquisition and logical or physical I/O

Task is put back on runnable queue when I/O returns

Task commits (writes commit record to transaction log)

Task is put to sleep pending log write

Task sends return status to client

 

Status Values Reported by sp_who

 

 

Each entry gives the status, the condition it indicates, and the effect of the kill command.

• recv sleep – waiting on a network read; kill takes effect immediately.

• send sleep – waiting on a network send; kill takes effect immediately.

• alarm sleep – waiting on an alarm, such as waitfor delay "10:00"; kill takes effect immediately.

• lock sleep – waiting on a lock acquisition; kill takes effect immediately.

• sleeping – waiting on disk I/O or some other resource; probably indicates a process that is running but doing extensive disk I/O. Killed when it "wakes up", usually immediately; a few sleeping processes do not wake up and require a server reboot to clear.

• runnable – in the queue of runnable processes; kill takes effect immediately.

• running – actively running on one of the server engines; kill takes effect immediately.

• infected – the server has detected a serious error condition; extremely rare. The kill command is not recommended; a server reboot is probably required to clear the process.

• background – a process, such as a threshold procedure, run by SQL Server rather than by a user process. kill takes effect immediately, but use it with extreme care; a careful check of sysprocesses is recommended before killing a background process.

• log suspend – a process suspended by reaching the last-chance threshold on the log. Killed when it "wakes up": 1) when space is freed in the log by a dump transaction command, or 2) when an SA uses the lct_admin function to wake up "log suspend" processes.

 

 

Only a System Administrator can issue the kill command: permission to use it cannot be transferred.

T-SQL Query to get all the tables and lock scheme info

The following query lists all user tables and the locking scheme of each:

select "Table" = left(name, 32),
       "lock_scheme" = case
            when (sysstat2 & 57344) < 8193  then "APL"
            when (sysstat2 & 57344) = 16384 then "DPL"
            when (sysstat2 & 57344) = 32768 then "DRL"
       end
from sysobjects
where type = "U"
order by name

Adaptive Server Enterprises :

Q1: Please let me know the system db names; what is the purpose of sybsystemdb?

Q2: Suppose our tempdb is filling up or filled up and you can't recycle the db server; what would be your steps?

Q3: The business team (AD) is reporting slow query performance; how will you investigate? Please consider all cases. (Hint: memory, stats, indexes, reorg, locks etc.)

Q4: Suppose our tempdb is not recovered; can we create a new database?

Q5: We have configured 7 dataserver engines for our PROD server (we have sufficient CPUs), yet we are still facing a performance hit. Possible root cause?

Q6: Suppose we are doing an ASE 15 upgrade by dump & load, and the 12.5 server has 2000 logins. Since syslogins has a different table structure in the two environments we can't use bcp; how will we move these logins from 12.5 to 15.0?

Q7: Which feature of ASE 15.0 most impressed you, and why?

Q8: What is your org's backup policy? What is dump tran with standby_access?

Q9: What is log suicide?

Q10: When do we require a log suicide of a DB?

Q11: What is bypass recovery? When do we require bypass recovery?

Q12: What is the difference between shutdown and shutdown with nowait, besides the immediate shutdown difference?

Q13: Suppose huge transactions are running in one of our databases and we issue shutdown with nowait. Will it affect the server restart, and how?

Q14: What is a named data cache? What is buffer pooling, and how does the cache hit ratio affect system performance?

Q15: We are getting stack traces for one of our databases. How will you investigate?

Q16: Is object-level recovery possible in ASE?

Q17: What is the difference between the sysstatistics and systabstats tables?

Q18: What is a histogram, and what is its default step value?

Q19: Why would we require a non-default step value in a histogram?

Q20: Can we run update statistics on one table in two steps (half the table first, then the rest)?

Interview Questions on User Management & Permissions

1. What is the Sybase security model for any user/login?

2. What is the difference between syslogins and sysusers?

3. How can we add a login in ASE? What are the required parameters of sp_addlogin?

4. What are aliases?

5. What's the difference between a role and a group, and which one is better?

6. How can we sync the logins from the prod to the UAT server? How many tables do we need to take care of for the login sync?

7. What's an suid mismatch?

8. Why do we require aliases?

9. What's the importance of the sysroles table in each database?

10. Explain syslogins, syssrvroles, sysloginroles and sysroles; what's the linkage among them all?

11. What is proxy authorization?

12. During a refresh from PROD -> UAT, which tables do we need to take care of?

13. Explain the sysprotects table and the sp_helprotect procedure.

14. Can we change the password of another login? If yes, how?

15. What role is required for user management?

16. Difference between 12.5 syslogins and 15.5 syslogins?

17. What is the guest user in a database, and why do we require the guest user?

18. What is the keycustodian_role in ASE 15.5?

19. How can we implement a password policy? Explain sp_passwordpolicy.

20. Can we enable the password history feature? From which version is it available, and how can we do that?

21. Can we have one SQL proc executed at login time, and how can we do that?

New Questions on 21st Feb 2011

1. How can we get the compression level information from dump files?

2. What is the difference between update and exclusive locks?

3. What is an isolation level in ASE? And what is the default isolation level?

4. How can we avoid deadlocks in the database?

5. Is there any way to print deadlock information in the errorlog?

6. Give two benefits of creating a database using the for load option.

7. What are the new features of Sybase 15? And which are you using in your day-to-day operations?

8. What is the join order in ASE (suppose we have 4-5 tables of different sizes)?

9. What is the difference between sysmon and the MDA tables?

10. Can we capture the output of sybmon in a table?

——

 

Replication Server:

Q1: How can we know whether the current ASE and Replication Server setup is a warm standby setup or not?

Q2: What is the function of the SQM and the SQT?

Q3: What is the 1TP & 2TP?

Q4: In how many ways can we find the transaction details that are causing a thread to go down?

Q5: Please explain the functionality of the rep server, starting from the PDB log through to the RDB.

Q6: What is the difference between the DSI and DSI EXEC threads?

Q7: Can we dump the queues?

Q8: Suppose our queues are filling up and will be 100% full within the next 2 hours; how will you investigate, and what are the troubleshooting steps?

Q9: How can we find the RSSD server name from the Replication Server?

Q10: What is the importance of the materialization & dematerialization queues?

Q11: What is the DIST thread of Replication Server?

Q12: What is the difference between a connection and a route?

Q13: What is the purpose of the ID server in a replication setup?

Q14: What is switch active?

New Questions:

What is the difference between sp_setreplicate and sp_setreptable?

What is the difference between routes and connections?

How can we check whether the current replication setup is WS (warm standby), table level, or db level?

What would be the impact of a long-running transaction in the PDB on the whole replication setup?

Suppose there is a temp table in an sp and we want to replicate it?

What is the importance of the rs_locator table in the replication server?

What is dbcc settrunc('ltm', 'valid'/'ignore')? When do we use this dbcc command?

What is the difference between rs_zeroltm and dbcc settrunc('ltm', 'valid')?

What are the different users in a common replication setup?

What is rs_subcmp?

New Question on 21st Feb

1. What are routes?

2. How can routes enhance performance?

3. What is a function string?

4. The replication queues are filling up; where do we need to look for the root cause?

5. If the DSI is down, how can we bring it up? What is rs_exception?

6. In a table-level replication setup we need to alter a column; what would be the steps for that?

7. Suppose there is a size mismatch between the table data and the replication definition columns? What will happen?

8. How can we refresh a database in the replication environment?

9. What factors affect replication agent performance in the primary database?

10. How can we do master database replication? Is it possible? What information can we replicate?

 

New Questions on 11th March 2011
=============================
What is an identity column?
What are the advantages and disadvantages of identity columns?
From a performance point of view, which is better: if exists or if not exists?
How can we avoid fragmentation in a table?
There is an update statement on one APL and one DOL table. Which one would be faster? Consider the cases: where clause on a clustered index column, and not using any index.
Why is reorg faster on a DOL table compared to a clustered index rebuild on an APL table?
Why is creating a clustered index with sorted_data on APL faster than reorg rebuild on DOL?
What is the Sybase recommendation for tempdb size? Suppose we have 300GB and 150GB databases in the server; what would the Sybase recommendation be for sizing tempdb?
What's the difference between dsync and direct I/O?
Suppose we are not concerned about recovery of the database; which would be better for performance, dsync (on/off) or direct I/O, and why?
What is asynchronous prefetch? How does it help enhance performance?
We have a 4K page size server; what are the possible pool sizes in the server?
As Sybase recommends a 4K pool for log usage on a 2K page size server, what is the pool recommendation for a 4K page size server?
How can we reduce spinlock contention without partitioning the data cache?
Can we have spinlock contention with a single engine?
In a sysmon report, which five sections do you look at for performance?
What is the metadata cache?
What is an archive database?
How can we enable the archive database for compressed backups?
How is object-level recovery possible in ASE?
How can we find the culprit spid which has filled up the tempdb database?
How can we find the culprit spid which has badly used the log segment of tempdb?
What is partitioning? How does partitioning help increase performance?
Suppose a table is partitioned on a column; how will the dataserver handle an insert into the table?
Apart from query plans, what else resides in the procedure cache?
What is the new config param "optimization goal"? What values can we provide for it?
A user is experiencing very slow performance; what can be the reasons for the slowness?
What is engine affinity, and how can we set the engine affinity?
If there are 3 CPUs in the box, how many engines can we configure?
Suppose the dataserver is running very slowly and sp_monitor shows 100% CPU usage; what are the possible issues? Where will you look?
What are the error classes in Replication Server?
What is the difference between warm standby and table-level replication?
Can you give five cases when a thread goes down in replication?
What are triggers? What are the types of triggers, and how many triggers can we configure on a table?
What are the different locking schemes in ASE, and what are latches?
How can we dump a replication queue?

How to Perform SQL Server Row-by-Row Operations Without Cursors

By David Van De Sompele

SQL cursors have been a curse to database programming for many years because of their poor performance. On the other hand, they are extremely useful because of their flexibility in allowing very detailed data manipulations at the row level. Using cursors against SQL Server tables can often be avoided by employing other methods, such as using derived tables, set-based queries, and temp tables. A discussion of all these methods is beyond the scope of this article, and there are already many well-written articles discussing these techniques.

The focus of this article is directed at using non-cursor-based techniques for situations in which row-by-row operations are the only, or the best method available, to solve a problem. Here, I will demonstrate a few programming methods that provide a majority of the cursor’s flexibility, but without the dramatic performance hit. 

Let’s begin by reviewing a simple cursor procedure that loops through a table. Then we’ll examine a non-cursor procedure that performs the same task.

if exists (select * from sysobjects where name = N'prcCursorExample')
   drop procedure prcCursorExample
go

CREATE PROCEDURE prcCursorExample
AS
/*
** Cursor method to cycle through the Customer table and get Customer Info for each iRowId.
**
** Revision History:
** ---------------------------------------------------
**  Date       Name       Description      Project
** ---------------------------------------------------
**  08/12/03   DVDS       Create            ----
**
*/
SET NOCOUNT ON

-- declare all variables!
DECLARE @iRowId           int,
        @vchCustomerName  nvarchar(255),
        @vchCustomerNmbr  nvarchar(10)

-- declare the cursor
DECLARE Customer CURSOR FOR
SELECT iRowId,
       vchCustomerNmbr,
       vchCustomerName
FROM   CustomerTable

OPEN Customer

FETCH Customer INTO @iRowId,
                    @vchCustomerNmbr,
                    @vchCustomerName

-- start the main processing loop.
WHILE @@Fetch_Status = 0
   BEGIN
   -- This is where you perform your detailed row-by-row
   -- processing.

   -- Get the next row.
   FETCH Customer INTO @iRowId,
                       @vchCustomerNmbr,
                       @vchCustomerName
   END

CLOSE Customer
DEALLOCATE Customer
RETURN

 


As you can see, this is a very straight-forward cursor procedure that loops through a table called CustomerTable and retrieves iRowId, vchCustomerNmbr and vchCustomerName for every row. Now we will examine a non-cursor version that does the exact same thing:

if exists (select * from sysobjects where name = N'prcLoopExample')
   drop procedure prcLoopExample
go

CREATE PROCEDURE prcLoopExample
AS
/*
** Non-cursor method to cycle through the Customer table and get Customer Info for each iRowId.
**
** Revision History:
** ------------------------------------------------------
**  Date       Name      Description           Project
** ------------------------------------------------------
**  08/12/03   DVDS      Create                -----
**
*/
SET NOCOUNT ON

-- declare all variables!
DECLARE @iReturnCode      int,
        @iNextRowId       int,
        @iCurrentRowId    int,
        @iLoopControl     int,
        @vchCustomerName  nvarchar(255),
        @vchCustomerNmbr  nvarchar(10),
        @chProductNumber  nchar(30)

-- Initialize variables!
SELECT @iLoopControl = 1
SELECT @iNextRowId = MIN(iRowId)
FROM   CustomerTable

-- Make sure the table has data.
IF ISNULL(@iNextRowId, 0) = 0
   BEGIN
        SELECT 'No data found in table!'
        RETURN
   END

-- Retrieve the first row
SELECT @iCurrentRowId   = iRowId,
       @vchCustomerNmbr = vchCustomerNmbr,
       @vchCustomerName = vchCustomerName
FROM   CustomerTable
WHERE  iRowId = @iNextRowId

-- start the main processing loop.
WHILE @iLoopControl = 1
   BEGIN
        -- This is where you perform your detailed row-by-row
        -- processing.

        -- Reset looping variables.
        SELECT @iNextRowId = NULL

        -- get the next iRowId
        SELECT @iNextRowId = MIN(iRowId)
        FROM   CustomerTable
        WHERE  iRowId > @iCurrentRowId

        -- did we get a valid next row id?
        IF ISNULL(@iNextRowId, 0) = 0
           BEGIN
                BREAK
           END

        -- get the next row.
        SELECT @iCurrentRowId   = iRowId,
               @vchCustomerNmbr = vchCustomerNmbr,
               @vchCustomerName = vchCustomerName
        FROM   CustomerTable
        WHERE  iRowId = @iNextRowId
   END
RETURN

 

There are several things to note about the above procedure.

For performance reasons, you will generally want to use a column like “iRowId” as your basis for looping and row retrieval. It should be an auto-incrementing integer data type, along with being the primary key column with a clustered index.

There may be times in which the column containing the primary key and/or clustered index is not the appropriate choice for looping and row retrieval. For example, the primary key and/or clustered index may have already been built on a column using the uniqueidentifier datatype. In such a case, you can usually add an auto-incrementing integer column to the table and build a unique index or constraint on it.

The MIN function is used in conjunction with greater than “>” to retrieve the next available iRowId. You could also use the MAX function in conjunction with less than “<” to achieve the same result:

SELECT    @iNextRowId = MAX(iRowId)

FROM      CustomerTable

WHERE     iRowId < @iCurrentRowId

Be sure to reset your looping variable(s) to NULL before retrieving the next @iNextRowId value. This is critical because the SELECT statement used to get the next iRowId will not set the @iNextRowId variable to NULL when it reaches the end of the table. Instead, it will fail to return any new values and @iNextRowId will keep the last valid, non-NULL value it received, throwing your procedure into an endless loop. This brings us to the next point: exiting the loop.

When @iNextRowId is NULL, meaning the loop has reached the end of the table, you can use the BREAK command to exit the WHILE loop.  There are other ways of exiting from a WHILE loop, but the BREAK command is sufficient for this example.

You will notice that in both procedures I have included the comments listed below in order to illustrate the area in which you would perform your detailed, row-level processing.

– This is where you perform your detailed row-by-row

– processing.    

Quite obviously, your row level processing will vary greatly, depending upon what you need to accomplish. This variance will have the most profound impact on performance.

For example, suppose you have a more complex task which requires a nested loop. This is equivalent to using nested cursors; the inner cursor, being dependent upon values retrieved from the outer one, is declared, opened, closed and deallocated for every row in the outer cursor. (Please reference the DECLARE CURSOR section in SQL Server Books Online for an example of this.) In such a case, you will achieve much better performance by using the non-cursor looping method, because SQL is not burdened by the cursor activity.

Here is an example procedure with a nested loop and no cursors:

if exists (select * from sysobjects where name = N'prcNestedLoopExample')
   drop procedure prcNestedLoopExample
go

CREATE PROCEDURE prcNestedLoopExample
AS
/*
** Non-cursor method to cycle through the Customer table
** and get Customer Name for each iCustId. Get all
** products for each iCustId.
**
** Revision History:
** -----------------------------------------------------
**  Date        Name       Description       Project
** -----------------------------------------------------
**  08/12/03    DVDS       Create            -----
**
*/
SET NOCOUNT ON

-- declare all variables!
DECLARE @iReturnCode        int,
        @iNextCustRowId     int,
        @iCurrentCustRowId  int,
        @iCustLoopControl   int,
        @iNextProdRowId     int,
        @iCurrentProdRowId  int,
        @vchCustomerName    nvarchar(255),
        @chProductNumber    nchar(30),
        @vchProductName     nvarchar(255)

-- Initialize variables!
SELECT @iCustLoopControl = 1
SELECT @iNextCustRowId = MIN(iCustId)
FROM   Customer

-- Make sure the table has data.
IF ISNULL(@iNextCustRowId, 0) = 0
   BEGIN
        SELECT 'No data found in table!'
        RETURN
   END

-- Retrieve the first row
SELECT @iCurrentCustRowId = iCustId,
       @vchCustomerName   = vchCustomerName
FROM   Customer
WHERE  iCustId = @iNextCustRowId

-- Start the main processing loop.
WHILE @iCustLoopControl = 1
   BEGIN
        -- Begin the nested (inner) loop.
        -- Get the first product id for the current customer.
        SELECT @iNextProdRowId = MIN(iProductId)
        FROM   CustomerProduct
        WHERE  iCustId = @iCurrentCustRowId

        -- Make sure the product table has data for the
        -- current customer.
        IF ISNULL(@iNextProdRowId, 0) = 0
           BEGIN
                SELECT 'No products found for this customer.'
           END
        ELSE
           BEGIN
                -- retrieve the first full product row for the
                -- current customer.
                SELECT @iCurrentProdRowId = iProductId,
                       @chProductNumber   = chProductNumber,
                       @vchProductName    = vchProductName
                FROM   CustomerProduct
                WHERE  iProductId = @iNextProdRowId
           END

        WHILE ISNULL(@iNextProdRowId, 0) <> 0
           BEGIN
                -- Do the inner loop row-level processing here.

                -- Reset the product next row id.
                SELECT @iNextProdRowId = NULL

                -- Get the next product id for the current customer.
                SELECT @iNextProdRowId = MIN(iProductId)
                FROM   CustomerProduct
                WHERE  iCustId = @iCurrentCustRowId
                AND    iProductId > @iCurrentProdRowId

                -- Get the next full product row for the current customer.
                SELECT @iCurrentProdRowId = iProductId,
                       @chProductNumber   = chProductNumber,
                       @vchProductName    = vchProductName
                FROM   CustomerProduct
                WHERE  iProductId = @iNextProdRowId
           END

        -- Reset inner loop variables.
        SELECT @chProductNumber   = NULL
        SELECT @vchProductName    = NULL
        SELECT @iCurrentProdRowId = NULL

        -- Reset outer looping variables.
        SELECT @iNextCustRowId = NULL

        -- Get the next iCustId.
        SELECT @iNextCustRowId = MIN(iCustId)
        FROM   Customer
        WHERE  iCustId > @iCurrentCustRowId

        -- Did we get a valid next row id?
        IF ISNULL(@iNextCustRowId, 0) = 0
           BEGIN
                BREAK
           END

        -- Get the next row.
        SELECT @iCurrentCustRowId = iCustId,
               @vchCustomerName   = vchCustomerName
        FROM   Customer
        WHERE  iCustId = @iNextCustRowId
   END
RETURN

In the above example we are looping through a customer table and, for each customer id, we are then looping through a customer product table, retrieving all existing product records for that customer. Notice that a different technique is used to exit from the inner loop. Instead of using a BREAK statement, the WHILE loop depends directly on the value of @iNextProdRowId. When it becomes NULL, having no value, the WHILE loop ends.

Conclusion

SQL Cursors are very useful and powerful because they offer a high degree of row-level data manipulation, but this power comes at a price: negative performance. In this article I have demonstrated an alternative that offers much of the cursor’s flexibility, but without the negative impact to performance. I have used this alternative looping method several times in my professional career to the benefit of cutting many hours of processing time on production SQL Servers.

In both of these cases the command
SET SHOWPLAN ON
is your greatest ally!

I also like to run
SET NOEXEC ON
so that the server doesn't execute the query. Useful when you want to benchmark UPDATE or DELETE commands and not accidentally change any data!

(Remember to run SET NOEXEC ON last, because if you run it first the SET SHOWPLAN ON statement will, of course, not be run!)
1> SET SHOWPLAN ON

2> SET NOEXEC ON

3> GO

 

1> SELECT *

2>   FROM post,

3>        users     

4>  WHERE post.userid = users.userid

5> GO

 

QUERY PLAN FOR STATEMENT 1 (at line 1).

 

 

    STEP 1

        The type of query is SELECT.

 

        FROM TABLE

            users

        Nested iteration.

        Table Scan.

        Forward scan.

        Positioning at start of table.

        Using I/O Size 2 Kbytes for data pages.

        With LRU Buffer Replacement Strategy for data pages.

 

        FROM TABLE

            post

        Nested iteration.

        Table Scan.

        Forward scan.

        Positioning at start of table.

        Using I/O Size 2 Kbytes for data pages.

        With LRU Buffer Replacement Strategy for data pages.

As you can see from the output we have a table scan on BOTH tables! YIKES! This will cause some problems as your tables start to fill up with information.

To fix this problem, create an index on users.userid like this:

1>        CREATE INDEX userid ON users( userid )

2>        GO

 

1> SELECT *

2>   FROM post,

3>        users     

4>  WHERE post.userid = users.userid

5> GO

 

QUERY PLAN FOR STATEMENT 1 (at line 1).

 

    STEP 1

        The type of query is SELECT.

 

        FROM TABLE

            post

        Nested iteration.

        Table Scan.

        Forward scan.

        Positioning at start of table.

        Using I/O Size 2 Kbytes for data pages.

        With LRU Buffer Replacement Strategy for data pages.

 

        FROM TABLE

            users

        Nested iteration.

        Index : userid

        Forward scan.

        Positioning by key.

        Keys are:

            userid  ASC

        Using I/O Size 2 Kbytes for data pages.

        With LRU Buffer Replacement Strategy for data pages.

As you can see, the users table is now using the index you created. The reason why post is still a table scan is that you are selecting all rows, so an index won't help you at all. A more complex WHERE clause using more columns from post would need an index to avoid the table scan.

To turn off NOEXEC And SHOWPLAN simply reverse the first command:

1> SET NOEXEC OFF

2> SET SHOWPLAN OFF

3> GO

 

QUERY PLAN FOR STATEMENT 1 (at line 1).

 

 

    STEP 1

        The type of query is SET OPTION OFF.

 

 

QUERY PLAN FOR STATEMENT 2 (at line 2).

 

 

    STEP 1

        The type of query is SET OPTION OFF.

 

1>

To enable stored procedure showplans:


DBCC TRACEON( 3604, 302 )

SET SHOWPLAN ON

SET FMTONLY ON

GO

 

EXEC sp_something

GO

To check what exactly is executed at the server level when a frontend user kicks off a report or any application module, use:

dbcc traceon(11201, 11202, 11203, 11204, 11205, 11206)

It produces huge output in the errorlog. Make sure to turn it off when the job is done.

How to separate data and log segments

  1. Use disk init to create the new log device for your database.

  2. dump tran with truncate_only – to make sure we clear the log.

  3. Use sp_logdevice to move the log to the new device:
     sp_logdevice <dbname>, <log_device>
     This changes the segmap of the old fragments from 7 to 3 and moves the log to the new device, which has a segmap of 4.

  4. dump tran with truncate_only – to make sure we clear any log that might remain on the data device.

  5. Use sp_helplog to make sure that the log starts on the log device.
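An end-to-end sketch of these steps (the database mydb, the device name and path, and the size are hypothetical; on older ASE versions disk init also needs a vdevno):

1> disk init name = "mydb_log_dev",
2>      physname = "/var/sybase/mydb_log.dat",
3>      size     = "100M"
4> go
1> dump tran mydb with truncate_only
2> go
1> sp_logdevice mydb, mydb_log_dev
2> go
1> dump tran mydb with truncate_only
2> go
1> use mydb
2> go
1> sp_helplog
2> go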

TRANSACTION LOG MANAGEMENT SYBASE

Description: A transaction log (also database log or binary log) is a history of the actions executed by a database management system, kept to guarantee the ACID properties across crashes or hardware failures. Physically, a log is a file of updates done to the database, stored in stable storage.

If, after a start, the database is found in an inconsistent state or shut down improperly, the database management system reviews the database logs for uncommitted transactions and rolls back the changes made by these transactions. Additionally, all transactions that are already committed but whose changes were not yet materialized in the database are re-applied. Both are done to ensure atomicity and durability of transactions.
Sybase ASE reads these logs at the time of its automatic recovery and takes the appropriate action to commit or roll back each transaction.
During a transaction, any change to a data record is first written to the syslogs system table and only then applied to the database tables. This log is known as the transaction log. The syslogs table exists in every database, so each database's log is tracked independently.
There are several ways to back up the transaction log so that it can be used to restore data after a device, database, or system crash.
Dependencies:

1. Target system OS user that will own the software (sybase).
2. Target system OS group (sybase).
3. The user should have access to ASE with the SA login.
4. Make sure dbccdb is installed in the ASE.
5. The SA role, or an equivalent role, should be used to run the DBCC commands.
6. Make sure the backup server is up and running.
7. Checkpoint execution configuration needs to be changed as required.

Steps:

1. Check that the specified data server exists and is running; if so, log on to the ASE data server. Use the SA role or an equivalent role. The current role can be checked with select show_role().

2. Check whether dbccdb is installed in the ASE server. If dbccdb is not installed, fail the step (abort) with the return message "The dbccdb is not installed, hence cannot run the database consistency check".

select name from master..sysdatabases
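As a minimal sketch, the existence check and the abort message can be combined like this (the print stands in for whatever abort mechanism the workflow uses):

if not exists (select 1 from master..sysdatabases where name = "dbccdb")
    print "The dbccdb is not installed, hence cannot run the database consistency check"
go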

3. Check and fix database consistency before dumping the database/transaction log. Make sure that the configuration required to run the dbcc checkstorage command has already been done; if not, set it up first.

>isql -U{user_name} -P{password} -S{servername}
>use master
>go
>dbcc checkstorage (db_name)
>go

4. Check and set the database options required to take the transaction/database dump efficiently. Run the flag checks below in the specified user database.

sp_dboption db_name, "trunc log on chkpt", false
sp_dboption db_name, "select into/bulkcopy/pllsort", false
sp_dboption db_name, "abort tran on log full", false

5. Check for the specific device that is supposed to be used for the backup transaction/database dump. Look for the device name in the list produced by the query below.

select name, phyname from master..sysdevices where status = 16

6. If no matching dump device is found, abort the workflow and display the message "There is no specified dump device found in the server for backup". If you want to create the dump device and register it, you can do so with the disk init command.

Assuming the device db_dump_dev already exists, register it with the command below:

sp_addumpdevice "disk", "db_dump_dev", "{physical path of the dump file}"

7. Check the free space on the server with df -h. If there is enough space on the disk for a dump device of the specified size, create dump devices for data and log.

#df -h {path of the directory where ASE is installed}

8. Check and ensure that the named backup server is up and running. If it is not running, start it with its RUN server script.

#ps -ef | grep backupserver | grep {servername}

9. Configure the backup server and connect it to the data server:

sp_addserver SYB_BACKUP, null, {name of backup server}

10. Dump the database to the specific device that has already been configured, using the command below:

>use master
>go
>dump database {db_name} to {db_dump_device}
>go

The above command performs a full dump of the specified database. Remember that the transaction log could be truncated if the database options are not set properly, which might cause loss of data from syslogs.

11. If you want to follow the full dump with incremental (transaction log) dumps, set the option below; it prevents a checkpoint from being written during recovery, so subsequent transaction log dumps can still be loaded.

sp_dboption {db_name}, “no chkpt on recovery”, true

12. Perform the transaction log backup, i.e. dump the tran log. This backs up only the insert, update, and delete records in the database. The syslogs table retains these records because the database option was set so the log is not truncated on checkpoint.

>use db_name
>go
>dump tran {db_name} to {log_dump_dev}
>go

13. Check the size of the dump file; if it is greater than 0, the transaction log data has been dumped.
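Restoring from these dumps is the reverse sequence. A sketch using the same placeholders (online database brings the database back online after the last load):

>load database {db_name} from {db_dump_device}
>go
>load tran {db_name} from {log_dump_dev}
>go
>online database {db_name}
>go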

GLOBAL GUIDELINE SYBASE INTERVIEW QUESTIONS

Tags

I. Fill in the blanks.

 

1. To append a string to an existing string, the ___________ command is used.

2. The ______________________ command is used to rename a table column.

3. ____________ is used to find out a table's size in Sybase.

4. The ______________________ command is used to start the Sybase server in single-user mode.

5. Database segmentation is useful for ________________.

6. The default databases in Sybase after installation are ___________, ______________, ______________, ___________, ______________.

7. ____________ is used to shut down the Sybase engine forcibly.

8. Memory can be allotted to Sybase in multiples of _______________.

9. _______ is the fill factor for any index page in Sybase.

10. To set server-level configuration, the ___________ command is used.

 

II . Describe briefly.

 

 1. What are the non-logged operations?

 2. What happens if the parameter data type of a stored procedure doesn't match the column type in the table?

 3. How can you capture the SQL being sent to a server?

 4. What is the difference between "fast" and "slow" bcp?

 5. What are the standard commands to run to troubleshoot stored procs/SQL?

 6. Why is it better to code SQL to use "if exists" instead of "if not exists"?

 7. What are the three locking schemes available in Sybase ASE?

 8. How can we issue cross-server queries?

 9. What is the skeletal code required for a cursor declaration and use in Sybase?

10. Write the steps to eliminate duplicate rows from a table.

 

I  Fill in Blanks

 

1. STUFF

2. Alter table <tab name> modify

3. sp_spaceused

4. runserver -m

5. To Improve performance

6. Master, Model, Tempdb, Sybsystemprocs, Sybsystemdb

7. Shutdown with nowait

8. 2K Pages

9. 10 %

10. sp_configure

 

II.  Describe briefly

 

1. truncate table, fast bcp, writetext to text fields, select into, and parallel sorts. You must have the "select into/bulkcopy" dboption set to true to do any of these.

 

2. Table scan. If updating, an automatic table lock.

3. dbcc traceon(3604); dbcc sqltext(spid); dbcc pss(suid, spid)

 

4. "Fast": non-logged; the database must have the select into/bulkcopy option set on, and the table must be without indexes or triggers. "Slow": with indexes: logged; does not need select into/bulkcopy set on.

 

5. dbcc traceon(3604,302,310)

set showplan on

set noexec on

set statistics io on

 

6. Because "if exists" stops the select the first time it finds a row, whereas "if not exists" must do a complete scan before making the determination.
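A tiny illustration (orders and its status column are hypothetical names):

if exists (select 1 from orders where status = "open")
    print "at least one open order exists"
go

The select can stop at the first matching row; a "not exists" test over the same predicate has to rule out every row before it can return.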

 

7. datarows, datapages, and allpages.

 

8. By creating proxy tables locally to mirror the remote tables that you want to query.

 

9. declare cursor_name cursor for
       select field1, field2
       from table_name
       where condition1 and condition2

declare @variable1 type1, @variable2 type2 (the variables must match the columns you select)

open cursor_name

fetch cursor_name into @variable1, @variable2

while (@@sqlstatus = 0)
begin
    /* perform actions */

    fetch cursor_name into @variable1, @variable2
end

close cursor_name

deallocate cursor cursor_name

 

10.

- Create a duplicate, skeletal structure of your target table, with a unique index created with ignore_dup_key on the columns in question. Select/insert into the new table, letting the individual duplicate rows be discarded with a warning. Your cloned table should now contain only unique records.

 

- Insert all duplicate rows into temporary table 1.

- Then take a distinct on those duplicate rows and insert the result into temp table 2.

- Delete from the target table where rows match temp table 2, then re-insert temp table 2's data into the target. This leaves just one copy of each previously duplicated row in the target table (see the sketch below).
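A sketch of this second approach, assuming a hypothetical target table orders whose rows are duplicated on order_id (select into temp tables is assumed to be permitted in tempdb, which is the usual default):

/* 1. copy every duplicated row into temp table 1 */
select * into #dup_rows
  from orders
 where order_id in (select order_id
                      from orders
                     group by order_id
                    having count(*) > 1)

/* 2. keep one distinct copy of each duplicated row */
select distinct * into #dedup from #dup_rows

/* 3. delete all the duplicated rows from the target... */
delete orders
  from orders, #dedup
 where orders.order_id = #dedup.order_id

/* 4. ...and re-insert the single copies */
insert orders select * from #dedup
go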

 

SYBASE ADMINISTRATION

 

The first key to administration is to define and enforce standards.

The next key is to be able to collect the distributed information and to be able to analyze it.

The third key to administration is to define procedures.

The last key to administration is to be proactive, not passive.

PERFORMANCE TUNING

Hardware Tuning

Generally tuning systems for optimal performance is a simple matter. If you spend more money, you get more performance. There are, however, methods of cost effectively tuning for performance. In order of importance to overall system throughput, I would:

  1. Buy many small disk drives instead of a few large disks. This will minimize IO contention.
  2. Buy extra disk controllers to eliminate yet another IO bottleneck.
  3. Buy additional RAM. Theoretically, under System 11 one can cache the entire database if one purchases enough RAM.
  4. Buy a faster CPU. The problem here is that most vendors do NOT sell cheap chip/motherboard upgrades, and purchasing a faster CPU often involves swapping out the old system.
  5. Buy Raid Disks – This has a big impact. See the section on RAID.

 

Sybase Tuning

The first trick to tuning your system is to identify the essential characteristics of that system and to identify potential bottlenecks. Consider whether your system is read or write intensive. You can often add indexing to speed things up on read-intensive tables. The same is NOT true for write-intensive tables. The next thing to do is to analyze your system to decide where to tune. Tuning an IO subsystem is only useful if your system is IO limited. It is best to decide which resource is the actual bottleneck before spending any tuning effort.

 

Spend some time optimizing disk device layout. It is an easy way to increase your performance.

 

Avoid cursor processing unless necessary. It is generally faster to process batches than row by row because that is how the system is tuned. Loops are especially slow.

 

Avoid long indexes or un-indexed tables that have more than a few rows. If there are any null fields in indexes, add 5 characters to the index length plus one per field that allows nulls. Generally, null fields in an index are not desirable due to data distribution issues.

 

Make sure your statistics are updated with information that reflects peak business usage. If you have a table that starts each day with 0 rows and ends each day with 2000 rows, the statistics should be updated somewhere between 1000 and 2000 rows.
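For example (a sketch; orders is a hypothetical table), refresh the statistics and mark dependent stored procedures for recompilation so they pick up the new distribution data:

update statistics orders
go
sp_recompile orders
go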

 

Tables that are constantly updated or inserted into can become very fragmented. Periodically rebuilding the indexes on these tables can save space and increase performance. This is especially important when the index's first field is either a datetime or an identity field (i.e., you are always adding to the end of the table).

 

Make sure that “transactions” do not linger in the database longer than is necessary. Any application using Sybase transactions should perform as fast as possible. Transactions hold locks, which can block other users and slow the system dramatically.

 

Triggers should only be used in rare circumstances (specific RI checks, auditing). It is preferable to restrict access to the database to stored procedures instead of using triggers. Remember, triggers are NOT compiled, and are therefore slow. They are also counter-intuitive (something happens when you are not looking) and make debugging a nightmare. Finally, a trigger is by definition part of a transaction, so long, complex triggers can easily cause deadlocking. When deadlocking is a serious problem, the first thing I look at is trigger design.

Use showplan and statistics io as much as possible on slow queries. Look for table scans and deferred updates (tmp tables being used).

 

The optimizer can only handle 4 tables at a time (at least up to System 10, and I do not think this has changed). This means that a 5-table join is performed as a search for a good four-table join, with the results being joined "somehow" to the remaining tables. The optimizer will often scan tables once you are over the 4-table limit and it gets to the 5th table. As the optimizer will use a temporary table anyway, it is preferable for you to create the temporary table explicitly.

 

 

ROUTINELY (every 5-15 minutes)

  • Dump Transaction Logs
  • Check for Server availability
  • Check for Blocks and Slow Performance

NIGHTLY

  • A daily backup should be performed on all servers.
  • Label and Store Tape Media in a secure location.
  • Check All Sybase Error Logs
  • Check DBCC output from previous night
  • Check Space on each disk
  • Check mirroring status
  • Check server for any security violations
  • Check Availability / Performance Of Server (log in)

WEEKLY

  • Generate electronic copy of system / server configuration
  • One backup tape for the previous week should be stored in a special location for auditing purposes.
  • Check referential integrity of all tables in database
  • Backup static tables using object level backup
  • Update Statistics on static tables
  • Rebuild any indexes that are sufficiently fragmented

MONTHLY

  • Notify any users who have not changed their passwords in a month that it is time
  • Generate monthly report of Server Performance statistics and send to business user.
  • Generate disk growth space report for each server
  • Generate report on number of connections or other server activity on server
  • Generate report of which users have access to what data and deliver it to business user
  • Evaluate server performance
  • Month end tapes should be stored in a special location.

PERIODIC

  • Reboot UNIX System
  • Reboot Sybase Server
  • Schedule audit of server by another administrator or by security officer
  • Generate hard copy of system / server configuration

SYBASE IMPORTANT QUESTIONS

 

Q1: How do you load SQLSERVER in single user mode ?

A1: By giving the command  “load sqlsrvr -m” at the server console.

 

Q2: How do you drop a corrupted database ?

A2: Use the command "dbcc dbrepair(database_name, dropdb)".

 

Q3: What are the various database options ?

A3: The various options available in Sybase System 10.0 are:

      1) select into/bulkcopy.

      2) ddl in tran.

      3) allow nulls by default.

      4) read only.

      5) single user.

      6) dbo use only.

      7) abort tran on log full.

      8) truncate log on checkpoint.

      9) no checkpoint on recovery.

     10) auto identity.

     11) no freespace accounting.

     

Q4: Whenever the space in a particular segment becomes full, a message must be displayed informing the users. How can this be done?

A4: Create a threshold on a segment and write appropriate code in the procedure of that threshold.
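A minimal sketch with sp_addthreshold; mydb, the free-page count, and notify_proc are placeholders, and notify_proc must be a stored procedure you have written (for the log's last-chance threshold, ASE calls sp_thresholdaction automatically):

use mydb
go
sp_addthreshold mydb, logsegment, 2048, notify_proc
go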

   

Q5: What does a checkpoint do?

A5: A checkpoint does the following:

       1) Writes all dirty pages (the changes made by committed transactions) physically to disk.

       2) Makes an entry in syslogs so that recovery becomes easier.

      

Q6: Can one create views on temporary tables?

A6: No. We cannot create views on temporary tables. 

 

Q7: What are the differences between Batch and Procedure?   

A7: A "batch" is one or more Transact-SQL statements terminated by an end-of-batch signal.

"Procedures" are collections of SQL statements and control-of-flow language. Procedures offer faster performance, reduced network traffic, better control for sensitive updates, and modular programming.

Uses: call other procedures; execute on remote SQL Servers; harness the power, efficiency, and flexibility of SQL in reusable units.

Q8: How do you lock a database?

A8: By setting the database option “dbo use only”.

 

Q9: Can you do bulk copy on temp tables?

A9: No. Bulk copy cannot be performed on temporary tables.

 

Q10: What is the default group offered by sybase?

A10: “public” is the default group.

 

Q11: To how many groups can a user belong to?

A11: One only.

 

Q12: How do you detect a deadlock?

A12: SQL Server displays a message when a deadlock occurs. The message number is 1204. One victim is selected and its process is rolled back. This user must submit the process again.

 

Q13: How do you execute a batch of commands in one statement ?

A13: By creating a procedure which embeds all the statements,

     and calling the procedure.          

 

Q14: What will you do to allow nulls in a table without specifying the same in a create table statement?

A14: Set the database option "allow nulls by default" to true.

 

Q15: Can you dump a database/transaction to an operating system file ?

A15: Yes. Dump database/transaction database_name to “physical_path”

 

Q16: What are the different roles available in sybase ?

A16: There are six roles in sybase. They are:    

       1) sa role.

       2) sso role.

       3) oper role.

       4) sybase_ts role.

       5) replication role.

6) navigator role.

      

Q17: How is sa_role different from sso_role?

A17: A person having sso_role can do the following:

      1) create/delete logins.

      2) change/set  the passwords of the users.

      3) manage remote logins.

      4) grant sso_role to other users.

    

     A person having the sa_role in sybase is considered as the super user of the system. He can do everything except the ones listed above.

         

Q18: What is the difference between a primary key and a unique key ?         

A18: A column defined as a primary key does not allow null values. But   a column defined as a unique key allows one null value. Primary key  by default creates a clustered index whereas a unique key creates a  non clustered index.

    

Q19: What are dirty reads?

A19: A 'dirty read' occurs when one transaction modifies a row and a second transaction reads that row before the first transaction commits the change. If the first transaction rolls back the change, the information read by the second transaction becomes invalid. This is called a dirty read.

  

Q20: How do I insert duplicate rows in a unique index?         

A20: A unique index normally rejects duplicate keys. If the index is created with the "ignore_dup_key" option, attempts to insert duplicate rows are discarded with a warning instead of aborting the transaction.

    

Q21: Which index do you prefer for a table that has lots of updates and insertions?

A21: A non-clustered index, because a clustered index physically rearranges the table's data as rows are updated and inserted.

   

Q22: What are segments? What are the uses of segments?

A segment is a named sub-set of a database device. It is basically used in fine tuning or optimizing the performance of the server. Placing a table on one segment and its non-clustered index on another segment causes the reads and writes to be faster. Similarly placing a database on one segment and its log on separate segment ensures that there is no disk contention during physical reads/writes and logging.
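A sketch of the idea; mydb, device27, the segment name, and the table and index names are all placeholders:

sp_addsegment seg_indexes, mydb, device27
go
create nonclustered index ix_orders_cust
    on orders(customer_id)
    on seg_indexes
go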

    

Q23: What are procedures and what are the uses of procedures?

A23: There are system procedures and stored procedures. Uses of procedures:

                        # Take parameters

                        # Call other procedures

                        # Return a status value to a calling procedure or batch to indicate success or failure, and the reason for failure

                        # Return values of parameters to a calling procedure or batch

                        # Be executed on remote SQL Servers

 

Q24: What is the difference between executing a set of statements in a batch and executing the statements in a procedure?     

A24: A batch takes more time than a procedure because it must be parsed, compiled, and optimized each time it is submitted, whereas a procedure's query plan is reused.

 

Q25: What will you do if one out of five users complaints that his system is working slow?

A25: Check whether a batch process is running against the server at that time.

 

Q26: A transaction T1 is defined in a procedure X. X calls another procedure Y.

     Can one issue a "rollback T1" statement in procedure Y?

A26: Yes.

 

Q27: What is the difference between truncate and delete?    

A27: Truncate: truncate table removes all rows without logging the individual deletions; it cannot be rolled back, makes almost no entries in syslogs, and is faster than delete.

             Delete: delete removes rows one at a time and is recoverable; it can be rolled back because each deletion is logged in syslogs, and it is slower.

 

Q28: Can one use DDL commands in a transaction ?

A28: Yes, by setting the database option “ddl in tran”,true.

 

Q29: What are the restrictions on updating a table through views ?

A29: An update through a view can affect only one underlying base table, and columns that are computed, aggregated, or derived from more than one table cannot be updated.

 

Q30: What do you do after issuing the sp_configure command ?

A30:  Issue Reconfigure with override. 

 

Q31: What is Intent lock?

A31: Intent locks indicate the intention to acquire a shared or exclusive lock on a data page. An intent lock is a table-level lock; it prevents other transactions from acquiring a conflicting table lock while page locks are held.

 

Q32: How do I call a remote procedure?

A32: exec remote_server_name.database_name.object_owner_name.procedure_name

 

Q33: Why do you dump database after issuing dump tran with no_log statement?

A33: Dump tran database_name with no_log clears the log without dumping it. Therefore complete recovery becomes impossible if the database fails or gets corrupted. To have a copy of all changes made, we should dump the database.

 

Q34: What databases are created during installation?

A34: Four databases are created during installation. They are

      a) master b) model c) tempdb d) sybsystemprocs.

      Optional extras are the pubs2 and sybsyntax databases.

     

Q35: How to display the current users role?

A35: By executing sp_displaylogin login_name, or select show_role() for the current login.

 

36.  When a record is deleted, what happens to the remaining records on that page?

A36. No physical movement of data occurs at the time the record is deleted. The record is tagged for future physical deletion.

 

37.  When does sybase reduce the amount of space allocated to an object when a large number of records have been deleted?

A37. When no more records reside on an extent, the extent is returned to the pool for use by other objects. There are several ways to force this. One way is to drop the clustered index and re-create it. Another is to bcp out, truncate the table (deallocating its extents), and then bcp the data back in (allocating only enough extents to hold the data).
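The first method as a sketch (orders and ix_orders are hypothetical names; rebuilding a clustered index rewrites the table compactly, and the sort needs roughly 1.2 times the table's size in free space):

drop index orders.ix_orders
go
create clustered index ix_orders on orders(order_id)
go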

 

38. Explain what happens when clustered index is created?

    * Physically sort the data

    * Sufficient space (approximately 1.2 times the actual data size) is required for the sorting process.

 

39. What happens when non-clustered index was created?

A leaf level is created by copying the specified index columns. The leaf level is sorted and uses pointers to the associated data pages.

 

40. When you install Sybase SQL Server what other server needs to be installed?

    The Backup Server

 

41. Why would we define a fill factor when creating an index?

 

42. How do we increase the size of database?

A42. Alter database

 

43. What utility does sybase use to import large volumes of data?

A43. BCP

 

44. When fast bcp is used to load data, what effect does it have on the transaction log?

A44. When fast bcp is used, Sybase does not log the individual row inserts; instead it logs the page allocations, in case of failure.

 

45.  What is stored in syslogs?

A45. the transaction log

 

46. Would frequent transaction log dumps be used for an application classified as decision support or on line transaction processing?

A46. Online transaction processing.

 

47. What happens when we try to create unique index on a column that contains duplicate values?

 

48. Does Syb allows null values in a column with unique index?

        Yes, one null value is allowed. Use a not null constraint when creating the table to disallow nulls.

 

49. When creating a non-unique clustered index, why would we use the 'ignore_dup_row' option?

A49. So that commands which would otherwise abort on duplicate rows can complete: with ignore_dup_row, the duplicate rows are deleted and the transaction continues.

    

50. What does 'update statistics' do?

A50. It refreshes the distribution statistics for index keys (and page counts), which the optimizer uses to choose query plans.

 

51. Describe some scenarios that would cause the transaction log to fill up.

        * transaction log not dumped often enough

        * when a single insert, update or delete affects large data

        * when a transaction remains open for long time

 

52. What are the DBCC commands ?

A52. dbcc checktable(table_name) { checks a specific table's consistency }

     dbcc checkdb(db_name) { checks all tables for a database }

     dbcc checkcatalog(db_name) { checks system tables }

     dbcc checkalloc(db_name) { checks page allocations }

     dbcc tablealloc(table_name) { checks table allocation pointers }

     dbcc indexalloc(table_name, index_id) { checks index page pointers }

     dbcc fix_alloc(db_name) { fixes allocation pages reported by checkalloc }

     dbcc dbrepair(database_name, dropdb) { drops a corrupt database }

 

53. what is the command “dbcc dbrepair” used for?

A53. If the database is corrupted, we can drop it using dbcc dbrepair(database_name, dropdb).

 

54. why should we separate transaction log and database on to separate physical  devices?

A54. To improve performance: reads and writes of data do not contend with log I/O, large tables and text/image data can be read efficiently, and heavily used tables perform better. Separating them also lets you manage the size of objects within the database and, most importantly, makes it possible to dump the transaction log for up-to-date recovery.

 

55. When would you use 'dump tran db_name with no_log'?

A55. If the transactions log is full.

 

56.  If an old transaction remains open and is causing the log to fill up, what should you do?

A56. Kill the process that holds the open transaction.

 

57. what is the role of sysusages table in MASTER DATABASE?

57A: The creation of a new database is recorded in the master database tables sysdatabases and sysusages.

 

58. What is the system procedure, created by the user, that monitors space usage on the segments and dumps the log when the last-chance threshold is reached?

A58. sp_thresholdaction

 

59. what is the last-chance threshold?

 

60. What is the recovery interval? It is an estimate of the time required by SQL Server to recover each database in case of system failure. (See the manual for details.)
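It is set with sp_configure; for example (5 minutes is an arbitrary value):

sp_configure "recovery interval in minutes", 5
go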

 

61. what does ‘truncate log on checkpoint’ do?

 

62. What happens (internally) when you try to insert a row into a table with a clustered index and the data page is full? A page split occurs.

      

63. How are rows added to a table that does not have a clustered index? They are added to the bottom (last page) of the table.

 

64. how do we recover the master database after database becomes corrupt?

        * Replace the generic master database: buildmaster -m

        * Start the SQL Server in single-user mode: startserver -m

        * load the most recent dump of master

        * Restart the SQL Server in single user mode

        * Check sysusages, sysdevices and sysdatabases against a recent backup copy.

        * run dbcc checkalloc and dbcc checkdb on all databases

        * dump the master database.

 

65. How do you change configuration values?

65A: sp_configure

 

66. If the syslogs of a database is full, what steps are taken?

66A: (a) dump the transaction log of the particular database:

                         dump tran database_name with no_log

                        dump tran database_name with truncate_only

            (b) alter the database to extend the log

 

67. What are the difference between clustered and non_clustered indexes?

67A: Clustered Indexes dictate the physical order of data. The leaf level of the clustered index is the data. A Non_Clustered index has a row in the leaf level of the index for every row in the table.

             

68. What does update statement do?

 

69. What are the constraints in sybase?

 

70. What does dump database does and dump tran does?

 

71. Will a file containing rows that have negative values for column b be added during a bulk-copy?

71A: Yes; rules, triggers, and constraints are not enforced during a bulk-copy operation.

72. What command you use to change the default value in column b to 5?

72A: Alter table table_name replace b default 5

 

73. What system table contains objects such as tables, rules, defaults, and triggers within a database?

73A: Sysobjects.

           

74. How many pages are allocated when a table is created?

74A: An extent, which is 8 pages.

 

75. What is difference between varchar and char?

75A: char is a fixed-length datatype padded with trailing spaces.

      varchar is a variable length data type.

 

76. How many no of triggers can be created on a table?

76A: Three, i.e. one each for insert, update, and delete.

77. what is normalization?   difference between normalization and denormalization?

 

77A: Normalization produces smaller tables with smaller rows:

            More rows per page (less logical I/O)

            More rows per I/O (more efficient)

            More rows fit in cache (less physical I/O)

           

Searching, sorting and creating indexes are faster, since tables are narrower and more rows fit on a data page.

 

You usually wind up with more tables. You can have more clustered indexes (you get only one per table), so you get more flexibility in tuning queries.

 

Index searching is often faster, since indexes tend to be narrower and shorter.

 

More tables allow better use of segments to control physical placement of data.

 

You usually wind up with fewer indexes per table, so data modification commands are faster.

 

You wind up with fewer null values and less redundant data, making your database more compact.

Triggers execute more quickly if you are not maintaining redundant data.

 

Data modification anomalies are reduced.

 

Normalization is conceptually cleaner and easier to maintain and change as your needs change.

           

While fully normalized databases require more joins, joins are generally very fast if indexes are available on the join columns.  SQL server is optimized to keep higher levels of the index in cache, so each join performs only one or two physical I/Os for each matching row.  The cost of finding rows already in the data cache is extremely low.

           

First Normal Form, Second Normal Form, Third Normal Form.

 

78. What types of relationships are there in Sybase 10?

78A: Relations become tables, attributes become columns, and relationships become data references (primary and foreign key references).

     

79. What are the data integrity types?

79A: Entity and referential.

 

80. How to update row by row from the database?

80A: Through cursors.

 

81. What are the system tables in sybase10?

81A:  sybsystemprocs

 

82. What is difference between sybase 4.2 and sybase10?

82A: cursors, sybsystemprocs,

 

83. How many types of locks? explain?

83A: Holdlock, noholdlock, or shared

            Page locks: Shared Locks, Exclusive Locks, Update Locks

            Table locks: Intent lock, shared lock, exclusive lock

            Demand locks: Sql Server sets a demand lock to indicate that a transaction is next in    

            line to lock a table or page.

           

84. Difference between oracle and sybase?

84A: Oracle has a GUI, whereas Sybase does not have a GUI concept.

      

85. How to create data type?

85A:sp_addtype

 

86. What type of locking will sybase follow: Row level, Column Level?

86A: Row level

 

87. What is the diff between Implicit and Non-Implicit cursors?.

87A: Implicit cursors are system-created cursors, whereas non-implicit cursors [explicit cursors] are user-created cursors.

 

88. What are subqueries? What are the different types of triggers?

 

89. What does NLM stand for? Novell Loadable Module.

 

90. What are the different types of backups in a Unix environment?

 

91. What are the differences between MS SQL Server and Sybase Server?

 

92. How do you execute a set of procedures at one time?

92A: By calling the other procedures with the EXEC command from a main procedure.

 

92 (b) How do you audit the system (an SSO-only task)?

92 (b):  sp_auditoption

            sp_auditdatabase dbname

            sp_auditobject table_name

            sp_auditsproc proc_name

            sp_auditlogin login_name, "...", on | off

            sp_auditrecord

            sp_configure “audit queue size” , #_audit_records

 

Question & Answers at the time of Installation

 

93. Why can't I connect to my SQL Server?

A. There are a variety of reasons for connection failure. Here are some things to check:

  

   (a) Make sure that the DSLISTEN parameter in “sybenv.dat” is set to the correct SQL server name.

   (b) Make sure that SQL server is up by entering the following command from the console: display servers

   (c) Check the error log to make sure that SQL server is advertising on the correct port. Remember to use the “cperrlog” utility to make a copy of the error log for viewing.

   (d) Make sure that the server name entries in "sybenv.dat" and the interfaces file match. Server names are case-sensitive.

      

94. Why can’t I load isql?

A. Your interfaces file may contain inaccurate information or be improperly formatted.

   Check your interfaces file for the following:

   (a) Blank lines

   (b) Network numbers that do not match the one Netware is using.

Your interfaces file must contain the address that the Netware file server advertises, which is in “autoexec.ncf” file.

For SQL server running on an SPX network, the IPX internal number is most critical. For SQL server running on a TCP/IP network, the IP number is critical.

   (c) Always use “sybinst” in SQL server 4.2.2 and “sybinit” in

       SQL server 10.0.x or 11.0.x to change the interfaces file.

      

95. What do I do when my “disk init” command fails with an error message about lack of space?

A. Configure your Netware environment to ensure that deleted files are actually purged from the system, allowing the space used by those files to be returned to the system. You can do this immediately by typing the "purge/all" command from the root directory of a client machine. To set up your environment to perform automatic, immediate purges, put the following line in your "sys:system\autoexec.ncf" file:

         

           set immediate purge of deleted files = on

 

96. When exiting from the "isql" NLM, from other utilities, or from SQL Server using a "quit" command, a message appears asking the operator to press a key. How can I have this message automatically answered? I want to unload utilities or the SQL server automatically, but as it is, an operator must be present to press a key to close the screen.

A. Invoke “isql” using the -k flag in the command; for example:

            load isql -k

           

97. What can I do to prevent Netware SQL server from monopolizing my machine's CPUs?

A. Invoke the "sqlsrvr" NLM with the "-P" flag.

   Consider using the -P option only when you are running complex queries

   that involve large amounts of data and you have the following problems:

   (a) SQL server is dropping existing connections

   (b) The Netware Console is hanging

   (c) The Netware system clock is running slowly

   Here is the syntax:

        sqlsrvr -Pnumber

   The number parameter controls the frequency with which the SQL server

   relinquishes the CPU to other Netware processes. The lower the value

   of number, the more often SQL server will relinquish the CPU. The

   default value of number is 3000.

 

98. Suppose my "autoexec.ncf" contains incorrect syntax or invokes a malfunctioning executable. How do I suppress execution of the "autoexec.ncf" configuration file when starting up my Netware server?

A. Start your Netware server with the -na (no autoexec) option on the command line: server -na

 

99. When I attempt to dump a database, Netware returns the following error:

       DFSExpandfile failed..

   What does this error message mean ?

A. There is insufficient space on the volume to which you are dumping. Free additional space on the volume by decreasing the time the Netware file server waits before it actually deletes a file. Enter the following commands from the console:

  

                                    set file delete wait time = 60 sec

                                    set minimum file delete wait time = 30 sec

 

100. How can I control whether or not SQL server creates a dynamic socket in

     situations in which the interfaces file is corrupted or missing?

A. The USE_DEFAULT_SPX parameter in "sybenv.dat" controls whether or not SQL server will create a dynamic socket. If USE_DEFAULT_SPX is set to TRUE, SQL server will dynamically generate and allocate a network address using SAP (Netware Service Advertising Protocol) if no server information is found in the interfaces file. This means that applications that expect SQL server to be listening on a particular socket may not be able to connect to the server. Applications must use the Netware bindery in order to connect to a dynamic socket.

  

101. Why do I see two network handler entries in my “SP_WHO” display?

A. Your SQL server supports both SPX and TCP protocols, and there are two entries in your interfaces file.

 

102. Can my Backup server and SQL server share a port number?

A. No. “sybinit” shows a default port number for a Backup server that is the same as that for SQL server.

   Do not accept the default; enter the appropriate port number(in hexadecimal)

 

   PROTOCOL        SQL Server Port           Backserver Port

   ================================================================

    SPX             0x83db                     0x83be

    TCP             0x1000                     0x1001

   ================================================================

 

103. Why is there a slight delay when my SQL server boots on a large system?

A. SQL server must allocate and then deallocate memory. For example, depending on the platform, a SQL server with 48MB of memory may take up to one minute to initialize.

 

104. Does SQL server directly support Netware Version 4.01 Directory Services?

A. No. If you are using Netware 4.01, install Netware Version 4.01 Directory Services with the Bindery Emulation mode.

  

105. Can I reload SQL server 11.0.x into the existing installation directory?

A. No. You must first remove the existing installation directory or load in another location. Otherwise, the load will fail because there are read-only files in the directory that cannot be updated.

 

123. The log is full and you want to add a device in sysdevices. What will you do?

            First unsuspend the log, i.e.

            select lct_admin("unsuspend", database_id)

            then:

            disk init

            sp_extendsegment logsegment, database_name, device_name

            dump tran database_name with no_log

            dump tran database_name with truncate_only

            alter the database

 

124. What are the uses of segments?

Segments improve performance for both reads and writes: a large table can be spread across devices, text and image data can be placed separately, and a heavily used table performs better. They also let you manage the size of objects within the database and make retrievals fast.

sp_placeobject segment_name, object_name

Place the log and data segments on two different disks.

 

125. What happens on shutdown with nowait?

A. It disables logins, does not issue a checkpoint, and terminates current transactions.

 


 

142. What are segments?

Segments are named subsets of the database devices mapped to a particular database. The default segments are system, logsegment, and default.

 

143. What is fill factor?

fillfactor specifies how full to make each page when you create a new index.

 

144. How to set sybase_ts_role?

sp_role "grant", sybase_ts_role, sa

go

set role "sybase_ts_role" on

go

 

145. What information will you get with sp_spaceused?

A145. We get space information for the current database: database_name, database_size, reserved, data, index_size, and unused space.

 

146. Why not index every column in a table?

A146.The most significant reason is that building an index takes time and storage space.  A second reason is that inserting, deleting or updating data into indexed columns takes a longer time than for un-indexed columns.    

 

SYBASE SELECT STATEMENT SYNTAXES

--------------------------------

select all/distinct into db/table from db/table

where ...

group by ...

having ...

order by ... db/table/view(column_name)

compute ...

 

select * from db/table

select * into table from db/table

select a.*, b.* into table from db/table where a..... = b.....

select shares * 0.50 into table from table

select * into table from table where column between .... and .....

select * into table from table where column not between .... and .....

select * into table from table where column in (select * from table)

select * into table from table where column in (select * from table where .....)

select * into table from table where column not in (select * from table where .....)

select * from table where column in (select * from table where column in (select * from table where .....))

select * into table from table where column like '   %'

select * into table from table where column not like '   %'

select * into table from table where column like '%pattern%'

select sum(column) from table

select avg(distinct price) from table where ......

select * into table from table where ..... group by .... column

select * into table from table where ..... group by (column) having count(*) > 1

select * into table from table where ..... group by (column) having count(*) > 1 order by ......

select * into table from table where ..... group by (column) having count(*) > 1 order by ...... compute sum(price) by column

select * from t1 union select * from t2

select a.*, b.* into table from db/table where a..... *= b..... (outer join keeping table 1)

select a.*, b.* into table from db/table where a..... =* b..... (outer join keeping table 2)

select a.*, b.* into table from db/table where a..... != b..... (any table)

select a.*, b.* into table from db/table where a.. != any (select * from tab2)

select * into table from db/table where exists (select * from tab2)

select * into table from db/table where not exists (select * from tab2 where ...)

sp_dboption database_name, "option", true|false

sp_addsegment

                       

 

DATABASE ERRORS

----------------

            sp_configure "allow updates", 1   -- required before updating system tables by hand

            use master

            update sysdatabases set status = 4 where name = "database_name"

            commit tran

            checkpoint

            shutdown with nowait

            -- restart the server and reconnect with isql

            update sysdatabases set status = -36 where name = "database_name"

            -- restart the server and reconnect with isql

            dbcc traceon(3605)

            dbcc dbrecover("bob")

            use bob

            dump tran bob with no_log

            checkpoint

            use master

            update sysdatabases set status = 4 where name = "database_name"

            checkpoint

            shutdown

 

What is good Performance?

Performance is the measure of efficiency of an application or multiple applications running in the same environment.  Performance is usually measured in response time and throughput.

 

What is Tuning?

Tuning is optimizing performance. A system model of SQL Server and its environment can be used to identify performance problems at each layer. The tuning layers in SQL Server are:

Applications layer – most performance gains come from query tuning, based on good database design.

Database layer – applications share resources at the database layer, including disks, the transaction log, and the data cache.

Server layer – data and procedure caches, locks, CPUs.

Devices layer – the disks and controllers that store your data.

Network layer – connects users to SQL Server.

Hardware layer – the CPU or CPUs available.

Operating system layer – ideally, SQL Server, Backup Server, and SQL Server Monitor are the only major applications on the machine.

 

 

What is the difference between a function and a procedure?

A function returns a value, whereas a procedure runs SQL statements.

 

What is a key?

 

What is the difference between cursors and triggers?

 

1. Problem

A user is working in PD² and the following error appears.

 

Figure 1: Error 1105

Error: “Can’t allocate space for object ‘syslogs’ in database ‘<database_name>’ because the ‘logsegment’ segment is full. If you ran out of space in syslogs, dump the transaction log. Otherwise, use ALTER DATABASE or sp_extendsegment to increase size of the segment.”

ErrCode: 1105

The user ran out of space in the transaction log on the database.

2. Verification

2.1 Using Sybase Central

Connect to the server using Sybase Central. After logging in as 'sa', double-click on the folder named Databases. Locate the database that is listed in the error message and double-click on it. When the list of folders appears, select the Segments folder (see Figure 2). The Segments folder lists the three segments that make up the database: default, logsegment, and system. The logsegment is used to store transaction log information. If your database is set to "truncate log on checkpoint" then this segment should not fill up under normal circumstances. (However, in this example this option has been turned off.) Refer back to the error message and verify which segment is full. If the error mentioned the logsegment, there will be a 0.00 (or close to 0.00) in the column under Free (MB).

 

Figure 2: Segments Folder

There is a second method for verifying available log space using Sybase Central. After logging in to the server, right-click on the server name and select "Log Space" from the pop-up menu, or highlight the server name and select "Log Space" from the File menu at the top of the window. A Server Log Space window will appear listing the current log size and percent used for each database on the server (see Figure 3). Locate the database that is listed in the error message. If the logsegment for that database is full then the percent used column will read 100% (or close to 100%).

 

Figure 3: Log Space Window

2.2 Using SQL Advantage

In WISQL or SQL Advantage execute the following command.

sp_helpdb <dbname>

<dbname> = the name of the database which has a full transaction log.

 

Figure 4: Results from sp_helpdb

When the results come back, note the information under the column entitled "free kbytes". This number will be 0 (or close to 0) if the logsegment is full.

3. Solution

3.1 Dumping the Transaction Log

Whenever a command is issued that inserts, updates or deletes a row in a table, the transaction is also sent to the transaction log. As each transaction is committed to the database, the amount of space in the log is reduced. If your site backs up its transaction data, dumping the transaction log frees up space by writing committed transactions to disk and then removing them from the log. When the transaction log fills up, it must be dumped or truncated before anyone can continue working in the system.

There are four ways to dump the transaction log.

  • Dump transactions to a dump device. This stores the committed transactions to a file that can be used in conjunction with the last database backup to make a full recovery of the database.
  • Dump transactions with truncate_only. This will truncate the transaction log without copying the committed transactions to a file.
  • Dump transactions with no_log. This will truncate the log without recording the event. This should only be done when the other dump transaction commands fail because of insufficient log space.
  • Reboot the server. This will terminate all uncommitted transactions and allow you to truncate the log. Do this when all other attempts to dump the transaction log fail.

Note: When the “truncate log on checkpoint” option is on, you cannot dump the transaction log to a dump device because changes to your data are not recoverable from transaction log dumps. In this situation, issuing the “dump transaction…to” command produces an error message instructing you to use dump database instead.
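In that situation, a full database dump is taken instead; as a sketch (DB_BACKUP is a hypothetical dump device created with sp_addumpdevice):

dump database SPS_M00001_DB to DB_BACKUP

go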

3.1.1 Dumping the Transaction Log to a Dump Device

If you have created dump devices for backing up the transaction log then execute the following command in WISQL or SQL Advantage to dump your transaction log.

dump tran <database_name> to <device_name>

<database_name> = the name of the database that is associated with the log that needs to be dumped.

<device_name> = the name of your dump device.

Example

dump tran SPS_M00001_DB to TRANS_BACKUP

go

 

Execute the following command in WISQL or SQL Advantage if you want to use a physical file to backup up your transaction log.

dump tran <database_name> to “<physical_file>”

<database_name> = the name of the database that is associated with the log that needs to be dumped.

<physical_file> = the directory and file name of the .dmp file.

Example

dump tran SPS_M00001_DB to "c:\sybase\backup\TRANS_BACKUP.dmp"

go

Note: If the physical file exists, the previous backup will be overwritten. If the physical file does not exist then one will be created.

Note: This command will not work if the “truncate log on checkpoint” option is turned on in the database.

3.1.2 Dumping the Transaction Log with Truncate Only

If you are not currently keeping backups of your transaction log and you do not need to retain this information, then you can execute the following command in WISQL or SQL Advantage to dump your transaction log without making a copy of it. This will remove committed transactions from the log while retaining uncommitted transactions.

dump tran <database_name> with truncate_only

<database_name> = the name of the database that is associated with the log that needs to be dumped.

Example

            dump tran SPS_M00001_DB with truncate_only

            go

Note: This command will not work if there is no space remaining in the transaction log because it requires a small amount of space in the transaction log in order to run.

3.1.3 Dumping the Transaction Log with No Log

If your Transaction Log is completely full (i.e. space remaining is 0.00MB) then you will not be able to dump the transaction log using any of the previously described methods. At this point you must execute the following command in WISQL or SQL Advantage to dump your transaction log. The following command will not store the data in the transaction log to a file and the “dump tran” action will not be recorded in the error log.

dump tran <database_name> with no_log

<database_name> = the name of the database that is associated with the log that needs to be dumped.

Example

dump tran SPS_M00001_DB with no_log

go

Note: Sybase recommends that you backup your database immediately after performing this function.

3.1.4 Rebooting the Server

If all other attempts to dump the transaction log fail, then you can reboot the Sybase server to terminate all uncommitted transactions. Once the server has restarted, log back into SQL Advantage and use one of the methods identified in Section 2: Verification to check the current space available in the log. If the log is still full then use one of the methods described above to dump the transaction log.

3.2 Setting Database Options

After you have successfully dumped the transaction log, take a few moments to verify the current database settings. As stated in Section 2: Verification, the transaction log will not fill up if the "truncate log on checkpoint" option is set on the database.

Note: For sites that perform a routine dump of their transaction log this option will not be set. However, for other sites that are not backing up their transaction logs, AMS recommends that this option be set in order to avoid this problem.

In order to verify whether the “truncate log on checkpoint” option is set execute the following command in WISQL or SQL Advantage.

sp_helpdb

Locate the name of your database and scroll over to the status column. In most cases the option will read “select into/bulkcopy, trunc log on chkpt”. If these options are not set then follow these instructions to set the options.

To change a database option in a PD² database you must execute the sp_dboption procedure from the master database.

The syntax for sp_dboption is as follows:

sp_dboption <dbname>, “<optname>”, {true | false}

<dbname> = the name of the database for which you are setting the option.

<optname> = the name of the option that you want to set.

<true|false> = the choice of setting. True = on. False = off.

Example

sp_dboption SPS_M00001_DB, “trunc log”, true

sp_dboption SPS_M00001_DB, “select into”, true

After setting the options, run the checkpoint command in your database for the changes to take effect.

use <dbname>

checkpoint

<dbname> = the name of the database for which you have changed the option.

3.3 Increasing the Size of Your Transaction Log

In some cases you may need to increase the size of your transaction log to prevent it from filling up. For upgrade purposes, AMS recommends that the transaction log be as big as the largest (non-text/image-containing) table in the database. In most cases, though, the transaction log is installed with 100MB of space.

If you need to increase the size of your transaction log or add a transaction log to a database that does not have one then follow these steps.

1. Create a database device. Be sure to use the proper naming convention when creating a device for transaction log data. Example: log_<database_name>_ADD1. This identifies the device as the first addition to the log for the database.

2. Attach the device to the database. Attaching a device for storing transaction log information differs slightly from attaching a device for storing data.

3.3.1 Using Sybase Central

When you arrive at the screen where you are prompted to select an available device, be sure to select the “Transaction Log” radio button as your device type.

 

Figure 5: Add Device Window

3.3.2 Using WISQL or SQL Advantage

You can attach a transaction log or increase the size of an existing transaction log on a database by executing the following command in WISQL or SQL Advantage.

alter database <dbname> log on <devname> = <size>

<dbname> = the name of the database.

<devname> = the name of the new log device that you created.

<size> = the size of the device in MB.

Example

alter database SPS_M00001_DB log on log_SPS_M00001_DB_ADD1 = 100

1.3.1 How to clear a log suspend

 

A connection that is in a log suspend state is there because the transaction that it was performing couldn’t be logged. The reason it couldn’t be logged is because the database transaction log is full. Typically, the connection that caused the log to fill is the one suspended. We’ll get to that later.

 

In order to clear the problem you must dump the transaction log. This can be done as follows:

 

    dump tran db_name to data_device

    go

 

At this point, any completed transactions will be flushed out to disk. If you don’t care about the recoverability of the database, you can issue the following command:

 

    dump tran db_name with truncate_only

 

If that doesn’t work, you can use the with no_log option instead of the with truncate_only.

 

After successfully clearing the log the suspended connection(s) will resume.

 

Unfortunately, as mentioned above, there is the situation where the suspended connection is the culprit that filled the log. Remember that dumping the log only clears out completed transactions. If the connection filled the log with one large transaction, then dumping the log isn't going to clear the suspension.

 

System 10

 

What you need to do is issue an ASE kill command on the connection and then un-suspend it:

 

    select lct_admin(“unsuspend”, db_id(“db_name”))
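
A sketch of the full sequence, assuming the suspended connection is spid 42 (a hypothetical value; check sp_who for the real spid):

    kill 42
    go
    select lct_admin("unsuspend", db_id("db_name"))
    go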

 

System 11

 

See Sybase Technical News Volume 6, Number 2

 

Retaining Pre-System 10 Behaviour

 

By setting a database’s abort tran on log full option, pre-System 10 behaviour can be retained. That is, if a connection cannot log its transaction to the log file, it is aborted by ASE rather than suspended.
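
A sketch of setting the option, using sp_dboption as described earlier (db_name is a placeholder):

    use master
    go
    sp_dboption db_name, "abort tran on log full", true
    go
    use db_name
    go
    checkpoint
    go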

 

Limiting Access to Server Resources

This chapter describes how to use resource limits to restrict the I/O cost, row count, or processing time that an individual login or application can use during critical times. It also describes how to create named time ranges to specify contiguous blocks of time for resource limits.

What are resource limits?

Adaptive Server provides resource limits to help System Administrators prevent queries and transactions from monopolizing server resources. A resource limit is a set of parameters specified by a System Administrator to prevent an individual login or application from:

  • Exceeding estimated or actual I/O costs, as determined by the optimizer
  • Returning more than a set number of rows
  • Exceeding a given elapsed time

The set of parameters for a resource limit includes the time of day to enforce the limit and the type of action to take. For example, you can prevent huge reports from running during critical times of the day, or kill a session whose query produces unwanted Cartesian products.

Planning resource limits

In planning a resource limit, consider:

  • When to impose the limit (times of day and days of the week)
  • Which users and applications to monitor
  • What type of limit to impose
    • I/O cost (estimated or actual) for queries that may require large numbers of logical and physical reads
    • Row count for queries that may return large result sets
    • Elapsed time for queries that may take a long time to complete either because of their own complexity or because of external factors such as server load
  • Whether to apply a limit to individual queries or to specify a broader scope (query batch or transaction)
  • Whether to enforce the I/O cost limits prior to or during execution
  • What action to take when the limit is exceeded (issue a warning, abort the query batch or transaction, or kill the session)

After completing the planning, use system procedures to:

  • Specify times for imposing the limit by creating a named time range using sp_add_time_range
  • Create new resource limits using sp_add_resource_limit
  • Obtain information about existing resource limits using sp_help_resource_limit
  • Modify time ranges and resource limits using sp_modify_time_range and sp_modify_resource_limit, respectively
  • Drop time ranges and resource limits using sp_drop_time_range and sp_drop_resource_limit, respectively

Enabling resource limits

Configure Adaptive Server to enable resource limits using the allow resource limits configuration parameter:

sp_configure "allow resource limits", 1

1 enables the resource limits; 0 disables them. allow resource limits is static, so you must restart the server to reset the changes.

allow resource limits signals the server to allocate internal memory for time ranges, resource limits, and internal server alarms. It also internally assigns applicable ranges and limits to login sessions.

Setting allow resource limits to 1 also changes the output of showplan and statistics i/o, as follows:

  • showplan displays estimated I/O cost information for DML statements. The information displayed is the optimizer’s cost estimate for the query as a unitless number. The total estimated I/O cost is displayed for the query as a whole. This cost estimate is dependent on the table statistics (number and distribution of values) and the size of the appropriate buffer pools. It is independent of such factors as the state of the buffer pools and the number of active users. For more information, see “Messages describing access methods, caching, and I/O cost” on page 787 in the Performance and Tuning Guide.
  • statistics i/o includes the actual total I/O cost of a statement according to the optimizer’s costing formula. This value is a number representing the sum of the number of logical I/Os multiplied by the cost of a logical I/O and the number of physical I/Os multiplied by the cost of a physical I/O. For more information on these numbers, see “How Is “Fast” Determined?” in the Performance and Tuning Guide.

Defining time ranges

A time range is a contiguous block of time within a single day across one or more contiguous days of the week. It is defined by its starting and ending periods.

Adaptive Server includes a predefined “at all times” range, which covers the period midnight through midnight, Monday through Sunday. You can create, modify, and drop additional time ranges as necessary for resource limits.

Named time ranges may overlap. However, the limits for a particular user/application combination may not be associated with named time ranges that overlap. You can create different limits that share the same time range.

For example, assume that you limit “joe_user” to returning 100 rows when he is running the payroll application during business hours. Later, you attempt to limit his row retrieval during peak hours, which overlap with business hours. You will get a message that the new limit failed, because it would have overlapped with an existing limit.

Although you cannot limit the row retrieval for “joe_user” in the payroll application during overlapping time ranges, nothing stops you from putting a second limit on “joe_user” during the same time range as the row retrieval limit. For example, you can limit the amount of time one of his queries can run to the same time range that you used to limit his row retrieval.

When you create a named time range, Adaptive Server stores it in the systimeranges system table to control when a resource limit is active. Each time range has a range ID number. The “at all times” range is range ID 1. Adaptive Server messages refer to specific time ranges.

Determining the time ranges you need

Use a chart like the one below to determine the time ranges to create for each server. Monitor server usage throughout the week; then indicate the periods when your server is especially busy or is performing crucial tasks that should not be interrupted.

[Blank scheduling chart: one row per day of the week (Mon, Tues, Wed, Thurs, …), one column per hour from 00:00 through 00:00; mark the hours when the server is especially busy or performing crucial tasks.]

Creating named time ranges

To create a new time range, use sp_add_time_range to:

  • Name the time range
  • Specify the days of the week to begin and end the time range
  • Specify the times of the day to begin and end the time range

For syntax and detailed information, see sp_add_time_range in the Reference Manual.

A time range example

Assume that two critical jobs are scheduled to run every week at the following times.

  • Job 1 runs from 07:00 to 10:00 on Tuesday and Wednesday.
  • Job 2 runs from 08:00 on Saturday to 13:00 on Sunday.

The following table uses “1” to indicate when job 1 runs and “2” to indicate when job 2 runs:

Day    Time
       00 01 02 03 04 05 06 07 08 09 10 11 12 13 14 15 16 17 18 19 20 21 22 23 00
Mon     .  .  .  .  .  .  .  .  .  .  .  .  .  .  .  .  .  .  .  .  .  .  .  .  .
Tues    .  .  .  .  .  .  .  1  1  1  1  .  .  .  .  .  .  .  .  .  .  .  .  .  .
Wed     .  .  .  .  .  .  .  1  1  1  1  .  .  .  .  .  .  .  .  .  .  .  .  .  .
Sat     .  .  .  .  .  .  .  .  2  2  2  2  2  2  2  2  2  2  2  2  2  2  2  2  2
Sun     2  2  2  2  2  2  2  2  2  2  2  2  2  2  .  .  .  .  .  .  .  .  .  .  .

Job 1 can be covered by a single time range, tu_wed_7_10:

sp_add_time_range tu_wed_7_10, tuesday, wednesday, "7:00", "10:00"

Job 2, however, requires two separate time ranges, for Saturday and Sunday:

sp_add_time_range saturday_night, saturday, saturday, "08:00", "23:59"
sp_add_time_range sunday_morning, sunday, sunday, "00:00", "13:00"

Modifying a named time range

Use sp_modify_time_range to:

  • Specify which time range to modify
  • Specify the change to the days of the week
  • Specify the change to the times of the day

For syntax and detailed information, see sp_modify_time_range in the Reference Manual.

For example, to change the end day of the business_hours time range to Saturday, retaining the existing start day, start time, and end time, enter:

sp_modify_time_range business_hours, NULL, Saturday, NULL, NULL

To specify a new end day and end time for the before_hours time range, enter:

sp_modify_time_range before_hours, NULL, Saturday, NULL, "08:00"

You cannot modify the “at all times” time range.

Dropping a named time range

Use sp_drop_time_range to drop a user-defined time range.

For syntax and detailed information, see sp_drop_time_range in the Reference Manual.

For example, to remove the evenings time range from the systimeranges system table in the master database, enter:

sp_drop_time_range evenings

You cannot drop the “at all times” time range or any time range for which resource limits are defined.

When do time range changes take effect?

The active time ranges are bound to a login session at the beginning of each query batch. A change in the server’s active time ranges due to a change in actual time has no effect on a session during the processing of a query batch. In other words, if a resource limit restricts query batches during a given time range, but the query batch begins before that time range becomes active, the query batch that is already running is not affected by the resource limit. However, if you run a second query batch during the same login session, that query batch will be affected by the change in time.

Adding, modifying, and deleting time ranges does not affect the active time ranges for the login sessions currently in progress.

If a resource limit has a transaction as its scope, and a change occurs in the server’s active time ranges while a transaction is running, the newly active time range does not affect the transaction currently in progress.

Identifying users and limits

For each resource limit, you must specify the object to which the limit applies.

You can apply a resource limit to any of the following:

All applications used by a particular login

All logins that use a particular application

A specific application used by a particular login

where application is defined as a client program running on top of Adaptive Server, accessed through a particular login. To run an application on Adaptive Server, you must specify its name through the CS_APPNAME connection property using cs_config (an Open Client Client-Library application) or the DBSETLAPP function in Open Client DB-Library. To list named applications running on your server, select the program_name column from the master..sysprocesses table.

For more information about the CS_APPNAME connection property, see the Open Client Client-Library/C Reference Manual. For more information on the DBSETLAPP function, see the Open Client DB-Library/C Reference Manual.

Identifying heavy-usage users

Before you implement resource limits, run sp_reportstats. The output from this procedure will help you identify the users with heavy system usage. For example:

sp_reportstats
Name    Since         CPU     Percent CPU  I/O     Percent I/O
------  -----------   -----   ------------ -----   -------------
probe   jun 19 1993   0       0%           0       0%
julie   jun 19 1993   10000   24.9962%     5000    24.325%
jason   jun 19 1993   10002   25.0013%     5321    25.8866%
ken     jun 19 1993   10001   24.9987%     5123    24.9234%
kathy   jun 19 1993   10003   25.0038%     5111    24.865%
Total CPU   Total I/O
---------   ---------
40006       20555

The output above indicates that usage is balanced among the users. For more information on chargeback accounting, see sp_reportstats in the Reference Manual.

Identifying heavy-usage applications

To identify the applications running on your system and the users who are running them, query the sysprocesses system table in the master database.

The following query determines that isql, payroll, perl, and acctng are the only client programs whose names were passed to the Adaptive Server:

select spid, cpu, physical_io,
  substring(user_name(uid),1,10) user_name,
  hostname, program_name, cmd 
from sysprocesses
spid  cpu    physical_io  user_name hostname program_name cmd
----  ---    -----------  --------- -------- ------------ ------
  17    4          12748  dbo       sabrina  isql         SELECT
 424    5              0  dbo       HOWELL   isql         UPDATE
 526    0            365  joe       scotty   payroll      UPDATE
 568    1           8160  dbo       smokey   perl         SELECT
 595   10              1  dbo       froth    isql         DELETE
 646    1              0  guest     walker   isql         SELECT
 775    4          48723  joe_user  mohindra acctng       SELECT
 
(7 rows affected)

Because sysprocesses is built dynamically to report current processes, repeated queries produce different results. Repeat this query throughout the day over a period of time to determine which applications are running on your system.

The CPU and physical I/O values are flushed to the syslogins system table periodically where they increment the values shown by sp_reportstats.

After identifying the applications running on your system, use showplan and statistics io to evaluate the resource usage of the queries in the applications.

If you have configured Adaptive Server to enable resource limits, you can use showplan to evaluate resources used prior to execution and statistics io to evaluate resources used during execution. For information on configuring Adaptive Server to enable resource limits, see “Enabling resource limits” earlier in this chapter.

 In addition to statistics io, statistics time is also useful for evaluating the resources a query consumes. Use statistics time to display the time it takes to execute each step of the query. For more information, see “Diagnostic Tools for Query Optimization” on page 12-6 in the Performance and Tuning Guide.

Choosing a limit type

After you determine the users and applications to limit, you have a choice of four different types of resource limits.

Table 6-1 describes the function and scope of each limit type and indicates the tools that help determine whether a particular query might benefit from this type of limit. In some cases, it may be appropriate to create more than one type of limit for a given user and application.

Table 6-1: Resource limit types

io_cost
  Use for queries that: require many logical and physical reads.
  Measuring resource usage: use set showplan on before running the query, to display its estimated I/O cost; use set statistics io on to observe the actual I/O cost.
  Scope: query.
  Enforced during: pre-execution or execution.

row_count
  Use for queries that: return large result sets.
  Measuring resource usage: use the @@rowcount global variable to help develop appropriate limits for row count.
  Scope: query.
  Enforced during: execution.

elapsed_time
  Use for queries that: take a long time to complete, either because of their own complexity or because of external factors such as server load or waiting for a lock.
  Measuring resource usage: use set statistics time on before running the query, to display elapsed time in milliseconds.
  Scope: query batch or transaction.
  Enforced during: execution.

tempdb_space
  Use for queries that: use all space in tempdb when creating work or temporary tables.
  Measuring resource usage: number of pages used in tempdb per session.
  Scope: query batch or transaction.
  Enforced during: execution.

The spt_limit_types system table stores information about each limit type.

Determining time of enforcement

Time of enforcement is the phase of query processing during which Adaptive Server applies a given resource limit. Resource limits occur during:

  • Pre-execution – Adaptive Server applies resource limits prior to execution, based on the optimizer’s I/O cost estimate. This limit prevents execution of potentially expensive queries. I/O cost is the only resource type that can be limited at pre-execution time.

When evaluating the I/O cost of data manipulation language (DML) statements within the clauses of a conditional statement, Adaptive Server considers each DML statement individually. It evaluates all statements, even though only one clause will actually be executed.

A pre-execution time resource limit can have only a query limit scope; that is, the values of the resources being limited at compile time are computed and monitored on a query-by-query basis only.

Adaptive Server does not enforce pre-execution time resource limits on statements in a trigger.

  • Execution – Adaptive Server applies resource limits at runtime; these limits are usually used to prevent a query from monopolizing server and operating system resources. Execution time limits may use more resources (additional CPU time as well as I/O) than pre-execution time limits.

Determining the scope of resource limits

The scope parameter specifies the duration of a limit in Transact-SQL statements. The possible limit scopes are query, query batch, and transaction:

  • Query – Adaptive Server applies resource limits to any single Transact-SQL statement that accesses the server; for example, select, insert, and update. When you issue these statements within a query batch, Adaptive Server evaluates them individually.

Adaptive Server considers a stored procedure to be a series of DML statements. It evaluates the resource limit of each statement within the stored procedure. If a stored procedure executes another stored procedure, Adaptive Server evaluates each DML statement within the nested stored procedure at the inner nesting level.

Adaptive Server checks pre-execution time resource limits with a query scope, one nesting level at a time. As Adaptive Server enters each nesting level, it checks the active resource limits against the estimated resource usage of each DML statement prior to executing any of the statements at that nesting level. A resource limit violation occurs if the estimated resource usage of any DML query at that nesting level exceeds the limit value of an active resource limit. Adaptive Server takes the action that is bound to the violated resource limit.

Adaptive Server checks execution time resource limits with a query scope against the cumulative resource usage of each DML query. A limit violation occurs when the resource usage of a query exceeds the limit value of an active execution time resource limit. Again, Adaptive Server takes the action that is bound to that resource limit.

  • Query batch – a query batch consists of one or more Transact-SQL statements; for example, in isql, a group of queries becomes a query batch when executed by a single go command terminator.

The query batch begins at nesting level 0; each call to a stored procedure increments the nesting level by 1 (up to the maximum nesting level). Each return from a stored procedure decrements the nesting level by 1.

Only execution time resource limits can have a query batch scope.

Adaptive Server checks execution time resource limits with a query batch scope against the cumulative resource usage of the statements in each query batch. A limit violation occurs when the resource usage of the query batch exceeds the limit value of an active execution time resource limit. Adaptive Server takes the action that is bound to that resource limit.

  • Transaction – Adaptive Server applies limits with a transaction scope to all nesting levels during the transaction against the cumulative resource usage for the transaction.

A limit violation occurs when the resource usage of the transaction exceeds the limit value of an active execution time resource limit. Adaptive Server takes the action that is bound to that resource limit.

Only execution time resource limits can have a transaction scope.

Adaptive Server does not recognize nested transactions when applying resource limits. A resource limit on a transaction begins when @@trancount is set to 1 and ends when @@trancount is set to 0.

Understanding limit types

There are four types of resource limits that allow you to limit resource usage in different ways.

Limiting I/O cost

I/O cost is based on the number of logical and physical accesses (“reads”) used during query processing. To determine the most efficient processing plan prior to execution, the Adaptive Server optimizer uses both logical and physical resources to compute an estimated I/O cost.

Adaptive Server uses the result of the optimizer’s costing formula as a “unitless” number; that is, a value not necessarily based on a single unit of measurement (such as seconds or milliseconds).

To set resource limits, you must understand how those limits translate into runtime system overhead. For example, you must know the effect that a query with a cost of x logical and of y physical I/Os has on a production server.

Limiting io_cost can control I/O intensive queries, including queries that return a large result set. However, if you run a simple query that returns all the rows of a large table, and you do not have current statistics on the table’s size, the optimizer may not estimate that the query will exceed the io_cost resource limit. To prevent queries from returning large result sets, create a resource limit on row_count.

The tracking of I/O cost limits may be less precise for partitioned tables than for unpartitioned tables when Adaptive Server is configured for parallel query processing. For more information on using resource limits in parallel queries, see the Performance and Tuning Guide.

Identifying I/O costs

To develop appropriate limits for I/O cost, determine the number of logical and physical reads required for some typical queries. Use the following set commands:

  • set showplan on displays the optimizer’s cost estimate. Use this information to set pre-execution time resource limits. A pre-execution time resource limit violation occurs when the optimizer’s I/O cost estimate for a query exceeds the limit value. Such limits prevent the execution of potentially expensive queries.
  • set statistics io on displays the number of actual logical and physical reads required. Use this information to set execution time resource limits. An execution time resource limit violation occurs when the actual I/O cost for a query exceeds the limit value.

Statistics for actual I/O cost include access costs only for user tables and worktables involved in the query. Adaptive Server may use other tables internally; for example, it accesses sysmessages to print out statistics. Therefore, there may be instances when a query exceeds its actual I/O cost limit, even though the statistics indicate otherwise.
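
To gather both numbers for a candidate query, you might run something like the following sketch (using the pubs2 sample database that ships with ASE):

set showplan on
go
set statistics io on
go
select au_lname, au_fname from pubs2..authors where state = "CA"
go
/* showplan reports the optimizer's estimated I/O cost;
   statistics io reports the actual logical and physical reads */
set showplan off
go
set statistics io off
go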

In costing a query, the optimizer assumes that every page needed will require a physical I/O for the first access and will be found in the cache for repeated accesses. Actual I/O costs may differ from the optimizer’s estimated costs, for several reasons.

The estimated cost will be higher than the actual cost if some pages are already in the cache or if the statistics are incorrect. The estimated cost may be lower than the actual cost if the optimizer chooses 16K I/O, and some of the pages are in 2K cache pools, which requires many 2K I/Os. Also, if a big join forces the cache to flush its pages back to disk, repeated access may require repeated physical I/Os.

The optimizer’s estimates will not be accurate if the distribution or density statistics are out of date or cannot be used.

Calculating the I/O cost of a cursor

The cost estimate for processing a cursor is calculated at declare cursor time for all cursors except execute cursors, for which it is calculated when the cursor opens.

Pre-execution time resource limits on I/O cost are enforced at open cursorname time for all cursor types. The optimizer recalculates the limit value each time the user attempts to open the cursor.

An execution time resource limit applies to the cumulative I/O cost of a cursor from the time the cursor opens to the time it closes. The optimizer recalculates the I/O limit each time a cursor opens.

For a discussion of cursors, see Chapter 17, “Cursors: Accessing Data Row by Row,” in the Transact-SQL User’s Guide.

The scope of the io_cost limit type

A resource limit that restricts I/O cost applies only to single queries. If you issue several statements in a query batch, Adaptive Server evaluates the I/O usage for each query.

Limiting elapsed time

Elapsed time is the number of seconds, in wall-clock time, required to execute a query batch or transaction. Elapsed time is determined by such factors as query complexity, server load, and waiting for locks.

To help develop appropriate limits for elapsed time, use information you have gathered with set statistics time. You can limit the elapsed time resource only at execution time.

With set statistics time set on, run some typical queries to determine processing time in milliseconds. Convert milliseconds to seconds when you create the resource limit.
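
For example, a sketch using the pubs2 sample database:

set statistics time on
go
select t.title, a.au_lname
from pubs2..titles t, pubs2..titleauthor ta, pubs2..authors a
where t.title_id = ta.title_id and ta.au_id = a.au_id
go
/* the output reports elapsed time in milliseconds;
   divide by 1000 when choosing a limit value in seconds */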

Elapsed time resource limits are applied to all SQL statements in the limit’s scope (query batch or transaction), not just to the DML statements. A resource limit violation occurs when the elapsed time for the appropriate scope exceeds the limit value.

Because elapsed time is limited only at execution time, an individual query will continue to run, even if its elapsed time exceeds the limit. If there are multiple statements in a batch, an elapsed time limit takes effect after a statement violates the limit and before the next statement is executed. If there is only one statement in a batch, setting an elapsed time limit has no effect.

Separate elapsed time limits are not applied to nested stored procedures or transactions. In other words, if one transaction is nested within another, the elapsed time limit applies to the outer transaction, which encompasses the elapsed time of the inner transaction. Therefore, if you are counting the wall-clock running time of a transaction, that running time includes all nested transactions.

The scope of the elapsed_time limit type

The scope of a resource limit that restricts elapsed time is either a query batch or transaction. You cannot restrict the elapsed time of a single query.

Limiting the size of the result set

The row_count limit type limits the number of rows returned to the user. A limit violation occurs when the number of rows returned by a select statement exceeds the limit value.

If the resource limit issues a warning as its action, and a query exceeds the row limit, the full number of rows is returned, followed by a warning that indicates the limit value; for example:

Row count exceeded limit of 50.

If the resource limit’s action aborts the query batch or transaction or kills the session, and a query exceeds the row limit, only the limited number of rows are returned and the query batch, transaction, or session aborts. Adaptive Server displays a message like the following:

Row count exceeded limit of 50.
Transaction has been aborted.

The row_count limit type applies to all select statements at execution time. You cannot limit an estimated number of rows returned at pre-execution time.

Determining row count limits

Use the @@rowcount global variable to help develop appropriate limits for row count. Selecting this variable after running a typical query can tell you how many rows the query returned.
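
For example, a sketch using the pubs2 sample database:

select title, price from pubs2..titles where price > $10
go
select @@rowcount
go
/* the second select reports how many rows the first one returned */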

Applying row count limits to a cursor

A row count limit applies to the cumulative number of rows that are returned through a cursor from the time the cursor opens to the time it closes. The optimizer recalculates the row_count limit each time a cursor opens.

The scope of the row_count limit type

A resource limit that restricts row count applies only to single queries, not to cumulative rows returned by a query batch or transaction. For more information, see “Determining the scope of resource limits”.

Setting limits for tempdb space usage

The tempdb_space resource limit restricts the number of pages a tempdb database can have during a single session. If a user exceeds the specified limit, the session can be terminated or the batch or transaction aborted.

For queries executed in parallel, the tempdb_space resource limit is distributed equally among the parallel threads. For example, if the tempdb_space resource limit is set at 1500 pages and a user executes the following with three-way parallelism, each parallel thread can create a maximum of 500 pages in tempdb:

select * into #temptable from partitioned_table

The SA or DBA sets the tempdb_space limit using sp_add_resource_limit, and drops the tempdb_space limit using sp_drop_resource_limit.
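
For example, a sketch that caps the hypothetical login “joe_user” at 1500 tempdb pages, enforced during execution (enforced = 2), aborting the query batch (action = 2), with query batch scope (scope = 2), following the parameter conventions shown in “Creating a resource limit” below:

sp_add_resource_limit joe_user, NULL, "at all times", tempdb_space, 1500, 2, 2, 2
go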

Creating a resource limit

Create a new resource limit with sp_add_resource_limit. The syntax is:

sp_add_resource_limit name, appname, rangename, limittype, limit_value, enforced, action, scope

Use this system procedure’s parameters to:

  • Specify the name of the user or application to which the resource limit applies.

You must specify either a name or an appname or both. If you specify a user, the name must exist in the syslogins table. Specify “null” to create a limit that applies to all users or all applications.

  • Specify the type of limit (io_cost, row_count, elapsed_time, or tempdb_space), and set an appropriate value for the limit type.

For more information, see “Choosing a limit type”.

  • Specify whether the resource limit is enforced prior to or during query execution.

Specify numeric values for this parameter. Pre-execution time resource limits, which are specified as 1, are valid only for the io_cost limit. Execution time resource limits, which are specified as 2, are valid for all limit types. For more information, see “Determining time of enforcement”.

  • Specify the action to be taken (issue a warning, abort the query batch, abort the transaction, or kill the session).

Specify numeric values for this parameter.

  • Specify the scope (query, query batch, or transaction).

Specify numeric values for this parameter. For more information, see “Determining the scope of resource limits”.

For detailed information, see sp_add_resource_limit in the Reference Manual.

Resource limit examples

This section includes three examples of setting resource limits.

Examples

Example 1

This example creates a resource limit that applies to all users of the payroll application because the name parameter is NULL:

sp_add_resource_limit NULL, payroll, tu_wed_7_10, elapsed_time, 120, 2, 1, 2

The limit is valid during the tu_wed_7_10 time range. The limit type, elapsed_time, is set to a value of 120 seconds. Because elapsed_time is enforced only at execution time, the enforced parameter is set to 2. The action parameter is set to 1, which issues a warning. The limit’s scope is set to 2, query batch, by the last parameter. Therefore, when the elapsed time of the query batch takes more than 120 seconds to execute, Adaptive Server issues a warning.

Example 2

This example creates a resource limit that applies to all ad hoc queries and applications run by “joe_user” during the saturday_night time range:

sp_add_resource_limit joe_user, NULL, saturday_night, row_count, 5000, 2, 3, 1

If a query (scope = 1) returns more than 5000 rows, Adaptive Server aborts the transaction (action = 3). This resource limit is enforced at execution time (enforced = 2).

Example 3

This example also creates a resource limit that applies to all ad hoc queries and applications run by “joe_user”:

sp_add_resource_limit joe_user, NULL, "at all times", io_cost, 650, 1, 3, 1

However, this resource limit specifies the default time range, “at all times.” When the optimizer estimates that the io_cost of the query (scope = 1) would exceed the specified value of 650, Adaptive Server aborts the transaction (action = 3). This resource limit is enforced at pre-execution time (enforced = 1).

Getting information on existing limits

Use sp_help_resource_limit to get information about existing resource limits.

Users who do not have the System Administrator role can use sp_help_resource_limit to list their own resource limits (only).

Users either specify their own login names as a parameter or specify the name parameter as “null.” The following examples return all resource limits for user “joe_user” when executed by joe_user:

sp_help_resource_limit

or

sp_help_resource_limit joe_user

System Administrators can use sp_help_resource_limit to get the following information:

  • All limits as stored in sysresourcelimits (all parameters NULL); for example:

sp_help_resource_limit

  • All limits for a given login (name is not NULL, all other parameters are NULL); for example:

sp_help_resource_limit joe_user

  • All limits for a given application (appname is not NULL; all other parameters are NULL); for example:

sp_help_resource_limit NULL, payroll

  • All limits in effect at a given time or day (either limittime or limitday is not NULL; all other parameters NULL); for example:

sp_help_resource_limit @limitday = wednesday

  • Limit, if any, in effect at a given time for a given login (name is not NULL, either limittime or limitday is not NULL); for example:

sp_help_resource_limit joe_user, NULL, NULL, wednesday

For detailed information, see sp_help_resource_limit in the Reference Manual.

Example of listing all existing resource limits

When you use sp_help_resource_limit without any parameters, Adaptive Server lists all resource limits within the server. For example:

sp_help_resource_limit
name     appname rangename rangeid limitid limitvalue enforced  action scope
----     ------- --------- ------- ------- ---------- --------  ------ -----
NULL     acctng  evenings        4       2       120         2       1     2
stein    NULL    weekends        1       3      5000         2       1     1
joe_user acctng  bus_hours       5       3      2500         2       2     1
joe_user finance bus_hours       5       2       160         2       1     6
wong     NULL    mornings        2       3      2000         2       1     1
wong     acctng  bus_hours       5       1        75         1       3     1

In the output, the rangeid column prints the value from systimeranges.id that corresponds to the name in the rangename column. The limitvalue column reports the value set by sp_add_resource_limit or sp_modify_resource_limit. Table 6-2 shows the meaning of the values in the limitid, enforced, action, and scope columns.

Table 6-2: Values for sp_help_resource_limit output

Column     Meaning                                       Values
limitid    What kind of limit is it?                     1 – I/O cost
                                                         2 – elapsed time
                                                         3 – row count
enforced   When is the limit enforced?                   1 – before execution
                                                         2 – during execution
                                                         3 – both
action     What action is taken when the limit is hit?   1 – issue a warning
                                                         2 – abort the query batch
                                                         3 – abort the transaction
                                                         4 – kill the session
scope      What is the scope of the limit?               1 – query
                                                         2 – query batch
                                                         4 – transaction
                                                         6 – query batch + transaction

If a System Administrator specifies a login name when executing sp_help_resource_limit, Adaptive Server lists all resource limits for that login. The output displays not only resource limits specific to the named user, but all resource limits that pertain to all users of specified applications, because the named user is included among all users.

For example, the following output shows all resource limits that apply to “joe_user”. Because a resource limit is defined for all users of the acctng application, this limit is included in the output.

sp_help_resource_limit joe_user
name     appname rangename rangeid limitid limitvalue enforced  action scope
----     ------- --------- ------- ------- ---------- --------  ------ -----
NULL     acctng  evenings        4       2       120         2       1     2
joe_user acctng  bus_hours       5       3      2500         2       2     1
joe_user finance bus_hours       5       2       160         2       1     6

Modifying resource limits

Use sp_modify_resource_limit to specify a new limit value or a new action to take when the limit is exceeded or both. You cannot change the login or application to which a limit applies or specify a new time range, limit type, enforcement time, or scope.

The syntax of sp_modify_resource_limit is:

sp_modify_resource_limit name, appname, rangename, limittype, limitvalue, enforced, action, scope

To modify a resource limit, specify the following values:

  • You must specify a non-null value for either name or appname.
    • To modify a limit that applies to all users of a particular application, specify a name of “null.”
    • To modify a limit that applies to all applications used by name, specify an appname of “null.”
    • To modify a limit that governs a particular application, specify the application name that the client program passes to the Adaptive Server in the login packet.
  • You must specify non-null values for rangename and limittype. If necessary to uniquely identify the limit, specify non-null values for action and scope.
  • Specifying “null” for limitvalue or action indicates that its value does not change.

For detailed information, see sp_modify_resource_limit in the Reference Manual.

Examples of modifying a resource limit

sp_modify_resource_limit NULL, payroll, tu_wed_7_10, elapsed_time, 90, null, null, 2

This example changes the value of the resource limit that restricts elapsed time to all users of the payroll application during the tu_wed_7_10 time range. The limit value for elapsed time decreases to 90 seconds (from 120 seconds). The values for time of execution, action taken, and scope remain unchanged.

sp_modify_resource_limit joe_user, NULL, saturday_night, row_count, NULL, NULL, 2, NULL

This example changes the action taken by the resource limit that restricts the row count of all ad hoc queries and applications run by “joe_user” during the saturday_night time range. The previous value for action was 3, which aborts the transaction when a query exceeds the specified row count. The new value, 2, aborts the query batch instead. The values for limit type, time of execution, and scope remain unchanged.

Dropping resource limits

Use sp_drop_resource_limit to drop a resource limit from an Adaptive Server.

The syntax is:

sp_drop_resource_limit {name , appname } [, rangename, limittype, enforced, action, scope]

Specify enough information to uniquely identify the limit. You must specify a non-null value for either name or appname. In addition, specify values according to those shown in Table 6-3.

Table 6-3: Identifying resource limits to drop

name
  • Specified login – drops limits that apply to the particular login.
  • NULL – drops limits that apply to all users of a particular application.

appname
  • Specified application – drops limits that apply to a particular application.
  • NULL – drops limits that apply to all applications used by the specified login.

timerange
  • An existing time range stored in the systimeranges system table – drops limits that apply to a particular time range.
  • NULL – drops all resource limits for the specified name, appname, limittype, enforcement time, action, and scope, without regard to rangename.

limittype
  • One of the limit types: row_count, elapsed_time, io_cost – drops limits that apply to a particular limit type.
  • NULL – drops all resource limits for the specified name, appname, timerange, action, and scope, without regard to limittype.

enforced
  • One of the enforcement times: pre-execution or execution – drops the limits that apply to the specified enforcement time.
  • NULL – drops all resource limits for the specified name, appname, limittype, timerange, action, and scope, without regard to enforcement time.

action
  • One of the four action types: issue warning, abort query batch, abort transaction, kill session – drops the limits that apply to a particular action type.
  • NULL – drops all resource limits for the specified name, appname, timerange, limittype, enforcement time, and scope, without regard to action.

scope
  • One of the scope types: query, query batch, transaction – drops the limits that apply to a particular scope.
  • NULL – drops all resource limits for the specified name, appname, timerange, limittype, enforcement time, and action, without regard to scope.

When you use sp_droplogin to drop an Adaptive Server login, all resource limits associated with that login are also dropped.

For detailed information, see sp_drop_resource_limit in the Reference Manual.

Examples of dropping a resource limit

Example 1

Drops all resource limits for all users of the payroll application during the tu_wed_7_10 time range:

sp_drop_resource_limit NULL, payroll, tu_wed_7_10

Example 2

Is similar to the preceding example, but drops only the resource limit that governs elapsed time for all users of the payroll application during the tu_wed_7_10 time range:

sp_drop_resource_limit NULL, payroll, tu_wed_7_10, elapsed_time

Example 3

Drops all resource limits for “joe_user” from the payroll application:

sp_drop_resource_limit joe_user, payroll

Resource limit precedence

Adaptive Server provides precedence rules for time ranges and resource limits.

Time ranges

For each login session during the currently active time ranges, only one limit can be active for each distinct combination of limit type, enforcement time, and scope. The precedence rules for determining the active limit are as follows:

  • If no limit is defined for the login ID for either the “at all times” range or the currently active time ranges, there is no active limit.
  • If limits are defined for the login for both the “at all times” and time-specific ranges, then the limit for the time-specific range takes precedence.

Resource limits

Since either the user’s login name or the application name, or both, are used to identify a resource limit, Adaptive Server observes a predefined search precedence while scanning the sysresourcelimits table for applicable limits for a login session. The following table describes the precedence of matching ordered pairs of login name and application name:

Level   Login name   Application name
1       joe_user     payroll
2       NULL         payroll
3       joe_user     NULL

If one or more matches are found for a given precedence level, no further levels are searched. This prevents conflicts regarding similar limits for different login/application combinations.

If no match is found at any level, no limit is imposed on the session.

 

1. SQL Debugger

 

Pros: A very useful feature for debugging stored procedures. Programmers can save a substantial amount of time during development and maintenance.

 

Cons: Though it offers the functionality, the interface falls far short of the Microsoft standard (i.e., the debugger available for VB).

 

Benefit to: Programmers

 

2. Compressed Backups

 

Pros: We can save a lot of tapes with this feature.

 

Cons: We need to evaluate the compression ratio and the time taken to restore when needed.

 

Benefit to: Finance Dept

 

3. Dynamic Server Options

 

Pros: Whenever typical parameters such as memory or cache sizes need to be tuned, Sybase has traditionally required a restart for the new values to take effect. During the daytime it is just not possible for DBAs to do this without affecting users, so they stay back at night, change the configuration, and wait until the next day for the results. Dynamic server options remove the need for a restart.

 

Cons: None

 

Benefit to: DBAS / users

 

4. Improved Query Plans & Optimized Temporary Table Management / Per-User Restriction

 

 

Pros: Queries and stored procedures run faster because an optimized plan is used internally and because temporary table management is handled more effectively. DBAs can also place a restriction on per-user space usage so that higher-priority tasks can run first.

 

Cons: None

 

Benefit to: Programmers , DBAs and  Users

 

5. Wide Column Support / Optimized Image Data Type

 

Pros: Though the image data type was already available in version 12, Sybase claims it has now been improved, so we can look at integrating images into the production databases.

 

Cons: The performance needs to be evaluated by piloting with a smaller fund such as BOB.

 

Benefit to: DBAs and Users

 

6. Quiesce

 

Pros: This is a very useful facility for maintaining 24/7 availability and for minimizing the DBA time spent on server management. When used properly, it can also take care of the report/data-entry server requirement effectively, and it can be used for replication of the databases.

 

Cons: To be evaluated

 

Benefit to: Everyone

 

7. Security Enhancements

 

Pros: We can now put in place a number of login restrictions, a long-felt requirement from Sybase. When used effectively, this can satisfy many security requirements on the database side.

 

Cons: None

 

Benefit to: People facing audits!

 

 

8. DBA Assistant for Installation, Performance Tuning, and Remote Server Management

 

Pros: Useful help for DBAs in each of these tasks.

 

Cons: None

 

Benefit to: DBAs

 

 

9. File Access Support

 

Pros: This can give a paradigm shift to the way we look at accessing data from the database. The documentation says that programmers can treat files such as Word and Excel documents as tables. This can be a great feature for delivering integrated solutions such as STP.

 

Cons: None

 

Benefit to: Programmers, DBAs

 

10. OpenSwitch for DR

 

Pros: This is an advanced version of the replication server, designed especially to take care of DR operations.

 

Cons: To be evaluated further for hardware dependencies.

 

Benefit to: All

 

11. XML Support

 

Pros: This is a feature directly supporting XML standards. If nothing else, it will set a standard in the industry and help position us as a technology organization. The documentation also mentions an XML-based query language called XQL.

 

Cons: To be evaluated further

 

Benefit to: The organization

Create indexes to avoid table scans.

Use a clustered index on a table to avoid hot spots.

Create tables with datarows locking.

sp_chgattribute titles, "fillfactor", 50

Base the order by clause on an index key available on the table.

Use the exp_row_size parameter in a create table statement to avoid row forwarding on data-only-locked tables.

Run update statistics on the selected tables.

Write the where clause based on the indexes available on the table.

Recompile stored procedures after significant changes are made to the underlying tables.

The optimizer makes its decisions based on the columns selected and the indexes and statistics available on the table.

Order the tables in the from clause to match the intended join order in the where clause (join transitive closure).
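
A sketch combining several of these tips, with hypothetical table and index names:

/* datarows locking plus an expected row size to limit row forwarding */
create table orders_drl (
    order_id int          not null,
    cust     varchar(30)  not null,
    notes    varchar(255) null
)
lock datarows
with exp_row_size = 120
go

/* clustered (placement) index to spread activity and avoid hot spots */
create clustered index orders_cix on orders_drl (order_id)
go

/* refresh optimizer statistics, then recompile dependent procedures */
update statistics orders_drl
go
sp_recompile orders_drl
go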

 

The SQL92 standard defines four levels of isolation for transactions. Each isolation level specifies the kinds of actions that are not permitted while concurrent transactions are executing. Higher levels include the restrictions imposed by the lower levels:

 

·Level 0 – ensures that data written by one transaction represents the actual data. It prevents other transactions from changing data that has already been modified (through an insert, delete, update, and so on) by an uncommitted transaction. The other transactions are blocked from modifying that data until the transaction commits. However, other transactions can still read the uncommitted data, which results in dirty reads.

 

In other words, data modified by tran1 can be read by tran2 before tran1 commits (a dirty read), but tran2 is not permitted to update it.

 

 

·Level 1 – prevents dirty reads. Such reads occur when one transaction modifies a row, and a second transaction reads that row before the first transaction commits the change. If the first transaction rolls back the change, the information read by the second transaction becomes invalid. This is the default isolation level supported by Adaptive Server.

 

 

·Level 2 – prevents nonrepeatable reads. Such reads occur when one transaction reads a row and a second transaction modifies that row. If the second transaction commits its change, subsequent reads by the first transaction yield different results than the original read.

Adaptive Server supports this level for data-only-locked tables. It is not supported for allpages-locked tables.

 

·Level 3 – ensures that data read by one transaction is valid until the end of that transaction, hence preventing phantoms. Adaptive Server supports this level through the holdlock keyword of the select statement, which applies a read-lock on the specified data. Phantoms occur when one transaction reads a set of rows that satisfy a search condition, and then a second transaction modifies the data (through an insert, delete, update, and so on). If the first transaction repeats the read with the same search conditions, it obtains a different set of rows.
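
For example, a sketch of holdlock on a single select (using the pubs2 sample database):

select price from pubs2..titles holdlock
where title_id = "BU1032"
go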

 

You can set the isolation level for your session by using the transaction isolation level option of the set command, or you can enforce the isolation level for just a query by using the at isolation clause of the select statement. For example:

 

set transaction isolation level 0
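
For a single query, the equivalent using the at isolation clause is sketched below (pubs2 sample database):

select * from pubs2..titles at isolation read uncommitted
go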

 

Default Isolation Levels for Adaptive Server and SQL92

By default, the Adaptive Server transaction isolation level is 1. The SQL92 standard requires that level 3 be the default isolation for all transactions. This prevents dirty reads, nonrepeatable reads, and phantoms. To enforce this default level of isolation, Transact-SQL provides the transaction isolation level 3 option of the set statement. This option instructs Adaptive Server to apply a holdlock to all select operations in a transaction. For example:

set transaction isolation level 3

 

Applications that use transaction isolation level 3 should set that isolation level at the beginning of each session. However, setting transaction isolation level 3 causes Adaptive Server to hold any read locks for the duration of the transaction. If you also use the chained transaction mode, that isolation level remains in effect for any data retrieval or modification statement that implicitly begins a transaction. In both cases, this can lead to concurrency problems for some applications, since more locks may be held for longer periods of time.

 

To return your session to the Adaptive Server default isolation level:

set transaction isolation level 1

Dirty Reads

 

Applications that are not impacted by dirty reads may have better concurrency and reduced deadlocks when accessing the same data by setting transaction isolation level 0 at the beginning of each session. An example is an application that finds the momentary average balance for all savings accounts stored in a table. Since it requires only a snapshot of the current average balance, which probably changes frequently in an active table, the application should query the table using isolation level 0. Other applications that require data consistency, such as deposits and withdrawals to specific accounts in the table, should avoid using isolation level 0.

Scans at isolation level 0 do not acquire any read locks for their scans, so they do not block other transactions from writing to the same data, and vice versa. However, even if you set your isolation level to 0, utilities (like dbcc) and data modification statements (like update) still acquire read locks for their scans, because they must maintain the database integrity by ensuring that the correct data has been read before modifying it.

Because scans at isolation level 0 do not acquire any read locks, it is possible that the result set of a level 0 scan may change while the scan is in progress. If the scan position is lost due to changes in the underlying table, a unique index is required to restart the scan. In the absence of a unique index, the scan may be aborted.

By default, a unique index is required for a level 0 scan on a table that does not reside in a read-only database. You can override this requirement by forcing Adaptive Server to choose a nonunique index or a table scan, as follows:

select * from table_name (index table_name)

Activity on the underlying table may abort the scan before completion.

Repeatable Reads

A transaction performing repeatable reads locks all rows or pages read during the transaction. After one query in the transaction has read rows, no other transaction can update or delete the rows until the repeatable-reads transaction completes. However, repeatable-reads transactions do not provide phantom protection by performing range locking, as serializable transactions do. Other transactions can insert values that can be read by the repeatable-reads transaction and can update rows so that they match the search criteria of the repeatable-reads transaction.

 

 

0 – Allows dirty reads

1 – Disallows dirty reads (default)

2 – Disallows nonrepeatable reads

3 – Disallows phantoms


 

 

 

 

SYBASE PART 1


Questions & Answers at the Time of Installation

 

93. Why can’t I connect to my SQL Server?

A. There are a variety of reasons for connection failure. Here are some things to check :

  

   (a) Make sure that the DSLISTEN parameter in “sybenv.dat” is set to the correct SQL server name.

   (b) Make sure that SQL server is up by entering the following command from the console: display servers

   (c) Check the error log to make sure that SQL server is advertising on the correct port. Remember to use the “cperrlog” utility to make a copy of the error log for viewing.

   (d) Make sure that the server name entries in “sybenv.dat” and the interfaces file match. Server name are case-sensitive.

      

94. Why can’t I load isql?

A. Your interfaces file may contain inaccurate information or be improperly formatted.

   Check your interfaces file for the following:

   (a) Blank lines

   (b) Network numbers that do not match the one Netware is using.

Your interfaces file must contain the address that the Netware file server advertises, which is in “autoexec.ncf” file.

For SQL server running on an SPX network, the IPX internal number is most critical. For SQL server running on a TCP/IP network, the IP number is critical.

   (c) Always use “sybinst” in SQL server 4.2.2 and “sybinit” in SQL server 10.0.x or 11.0.x to change the interfaces file.

      

95. What do I do when my “disk init” command fails with an error message about lack of space?

A. Configure your Netware environment to ensure that deleted files are actually purged from the system, allowing the space used by those files to be returned to the system. You can do this immediately by typing the “purge /all” command from the root directory of a client machine. To set up your environment to perform automatic, immediate purges, put the following line in your “sys:system\autoexec.ncf” file:

         

           set immediate purge of deleted files = on

 

96. When exiting from the “isql” NLM, from other utilities, or from SQL Server using a “quit” command, a message appears that must be acknowledged from the keyboard. How can I have this message automatically answered? I want to unload utilities or the SQL server automatically, but as it is, an operator must be present to press a key to close the screen.

A. Invoke “isql” using the -k flag in the command; for example:

            load isql -k

           

97. What can I do to prevent Netware SQL Server from monopolizing my machine’s CPU?

A. Invoke the “sqlsrvr” NLM with the “-P” flag. Consider using the -P option only when you are running complex queries that involve large amounts of data and you have the following problems:

   (a) SQL server is dropping existing connections

   (b) The Netware Console is hanging

   (c) The Netware system clock is running slowly

   Here is the syntax:

        sqlsrvr -Pnumber

   The number parameter controls the frequency with which SQL Server relinquishes the CPU to other Netware processes. The lower the value of number, the more often SQL Server will relinquish the CPU. The default value of number is 3000.

 

98. Suppose my “autoexec.ncf” contains incorrect syntax or invokes a malfunctioning executable. How do I suppress execution of the “autoexec.ncf” configuration file when starting up my Netware server?

A. Start your Netware server with the -na (no autoexec) option on the command line: server -na

 

99. When I attempt to dump a database, Netware returns the following error:

       DFSExpandfile failed..

   What does this error message mean ?

A. There is insufficient space on the volume to which you are dumping. Free additional space on the volume by decreasing the time the Netware file server waits before it actually deletes a file. Enter the following commands from the console:

  

        set file delete wait time = 60 sec

        set minimum file delete wait time = 30 sec

 

100. How can I control whether or not SQL server creates a dynamic socket in situations in which the interfaces file is corrupted or missing?

A. The USE_DEFAULT_SPX parameter in “sybenv.dat” controls whether or not SQL server will create a dynamic socket. If USE_DEFAULT_SPX is set to TRUE, SQL server will dynamically generate and allocate a network address using SAP (Netware Service Advertising Protocol) if no server information is found in the interfaces file. This means that applications that expect SQL server to be listening on a particular socket may not be able to connect to the server. Applications must use the Netware bindery in order to connect to a dynamic socket.

  

101. Why do I see two network handler entries in my “SP_WHO” display?

A. Your SQL server supports both SPX and TCP protocols, and there are two entries in your interfaces file.

 

102. Can my Backup server and SQL server share a port number?

A. No. “sybinit” shows a default port number for a Backup server that is the same as that for SQL server. Do not accept the default; enter the appropriate port number (in hexadecimal):

 

   PROTOCOL        SQL Server Port           Backup Server Port

   ================================================================

    SPX             0x83db                     0x83be

    TCP             0x1000                     0x1001

   ================================================================

 

103. Why is there a slight delay when my SQL server boots on a large system?

A. SQL server must allocate and then deallocate memory. For example, depending on the platform, a SQL server with 48MB of memory may take up to one minute to initialize.

 

104. Does SQL server directly support Netware Version 4.01 Directory Services?

A. No. If you are using Netware 4.01, install Netware Version 4.01 Directory Services with Bindery Emulation mode.

  

105. Can I reload SQL server 11.0.x into the existing installation directory?

A. No. You must first remove the existing installation directory or load it in another location. Otherwise, the load will fail because there are read-only files in the directory that cannot be updated.

 

123. The log is full and you want to add a device in sysdevices. What do you do?

            First unsuspend the log, i.e.

            select lct_admin(“unsuspend”, database_id)

            Then create the device and free or extend the log space (note: use sp_extendsegment, not sp_addsegment, to put the existing logsegment on a new device):

            disk init

            alter database … log on …

            sp_extendsegment logsegment, database_name, device_name

            dump tran database_name with truncate_only

            (or, as a last resort, dump tran database_name with no_log)
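
A minimal sketch of extending the log onto a new device, assuming a 2K-page server (device name, path, vdevno, and sizes are illustrative):

1> select lct_admin("unsuspend", db_id("dbname"))
2> go
1> disk init name = 'logdev2',
2>      physname  = '/dev/md/rdsk/d22',
3>      vdevno    = 7,
4>      size      = 51200
5> go
1> alter database dbname log on logdev2 = 100
2> go
1> dump tran dbname with truncate_only
2> go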

 

124. What are the uses of segments?

A. Segments can be used to:

            * improve read and write performance on large, heavily used tables (for example, by placing text and image data on their own device)

            * manage the size of objects within the database

            * make retrievals faster, e.g. by placing the log and data segments on two different disks

Use sp_placeobject segment_name, object_name to direct an object’s future allocations to a segment; a sketch follows.
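
A hedged sketch of placing a busy table on its own segment (segment, database, device, and table names are illustrative):

1> sp_addsegment seg_big, dbname, device27
2> go
1> sp_placeobject seg_big, 'big_table'
2> go

Future space allocations for big_table then come only from the device(s) backing seg_big.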

 

125. What happens on shutdown with nowait?

A. It disables logins, does not issue a checkpoint, and terminates current transactions.

 

137. What is the difference between truncate and delete?

            truncate: truncate table removes all rows without logging individual row deletions in syslogs, cannot be rolled back, and is faster than delete.

            delete: delete removes rows one at a time, logs every row in syslogs so the operation can be rolled back, and is slower.

           

138. What is an intent lock?

Intent locks indicate the intention to acquire a shared or exclusive lock on a data page. An intent lock is a table-level lock, used to prevent other transactions from acquiring a conflicting table-level lock on the pages it covers.

 

139. How do you call a remote procedure?

exec remote_server_name.database_name.owner_name.procedure_name
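
For example (server, database, and procedure names are illustrative; the remote server must already be known via sp_addserver and the interfaces file):

exec SYB_PROD.sales.dbo.monthly_rollup
go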

 

140. Why do you take a database backup after dump tran database_name with no_log?

It clears the log without dumping it, so recovery cannot be brought up to date without a fresh database backup.

 

142. What are segments?

Segments are named subsets of the database devices mapped to a particular database. Every database starts with the system, logsegment, and default segments.

 

143. what is fill factor?

fillfactor specifies how full to make each page when you create a new index.

 

144. How to set Sybase_Ts_Role?

sp_role “grant”, sybase_ts_role, sa

go

set role “sybase_ts_role” on

go

 

145. What information you will get with sp_spaceused?

A145. sp_spaceused reports space usage for the current database (or for a named table), i.e.

               database_name, database_size, reserved, data, index_size, unused space.

 

146. Why not index every column in a table?

A146. The most significant reason is that building an index takes time and storage space. A second reason is that inserting, deleting or updating data in indexed columns takes longer than for un-indexed columns.

SYBASE IMPORTANT QUESTIONS


Q1: How do you load SQLSERVER in single user mode ?

A1: By giving the command  “load sqlsrvr -m” at the server console.

 

Q2: How do you drop a corrupted database ?

A2: Use the command “dbcc dbrepair(database_name, dropdb)”.

 

Q3: What are the various database options ?

A3: The various options available in Sybase System 10.0 are:

      1) select into/bulkcopy.

      2) ddl in tran.

      3) allow nulls by default.

      4) read only.

      5) single user.

      6) dbo use only.

      7) abort tran on log full.

      8) truncate log on checkpoint.

      9) no checkpoint on recovery.

     10) auto identity.

     11) no freespace accounting.

     

Q4: Whenever the space in a particular segment becomes full, a message must be displayed. How can this be arranged?

A4: Create a threshold on the segment and write appropriate code in that threshold’s procedure; a minimal sketch follows.
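
A minimal sketch (database, segment, free-page count, and procedure names are illustrative):

1> use dbname
2> go
1> sp_addthreshold dbname, seg_big, 2048, warn_seg_full
2> go

When free space on seg_big falls below 2048 pages, the server executes warn_seg_full.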

   

Q5: What does a checkpoint do?

A5: A checkpoint does the following:

       1) Flushes all dirty pages (changes made by committed transactions) physically to disk; it does not itself commit transactions.

       2) Writes a checkpoint record in syslogs so that recovery becomes faster.

      

Q6: Can one create views on temporary tables?

A6: No. We cannot create views on temporary tables. 

 

Q7: What are the differences between Batch and Procedure?   

A7: A “batch” is one or more Transact-SQL statements terminated by an end-of-batch signal.

“Procedures” are collections of SQL statements and control-of-flow language. Procedures give faster performance, reduced network traffic, better control over sensitive updates, and modular programming.

Uses: call other procedures; execute on remote SQL Servers; combine the power, efficiency and flexibility of SQL with control-of-flow logic.

Q8: How do you lock a database?

A8: By setting the database option “dbo use only”.

 

Q9: Can you do bulk copy on temp tables?

A9: No. Bulk copy cannot be performed on temporary tables.

 

Q10: What is the default group offered by sybase?

A10: “public” is the default group.

 

Q11: To how many groups can a user belong to?

A11: One only.

 

Q12: How do you detect a deadlock?

A12: Sql server displays a message when a deadlock occurs; the message number is 1205. One victim is selected and its process is rolled back. That user must submit the process again.

 

Q13: How do you execute a batch of commands in one statement ?

A13: By creating a procedure which embeds all the statements, and calling the procedure.

 

Q14: What will you do to allow nulls in a table without specifying the same in a create table statement?

A14: Set the database option “allow nulls by default” to true.
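
For example (the database name is illustrative; sp_dboption runs from master, and a checkpoint in the target database makes the option take effect):

1> use master
2> go
1> sp_dboption dbname, 'allow nulls by default', true
2> go
1> use dbname
2> go
1> checkpoint
2> go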

 

Q15: Can you dump a database/transaction to an operating system file ?

A15: Yes: dump database database_name to “physical_path” (likewise for dump transaction).

 

Q16: What are the different roles available in sybase ?

A16: There are six roles in sybase. They are:    

       1) sa_role

       2) sso_role

       3) oper_role

       4) sybase_ts_role

       5) replication_role

       6) navigator_role

      

Q17: How is sa_role different from sso_role?

A17: A person having sso_role can do the following:

      1) create/delete logins.

      2) change/set  the passwords of the users.

      3) manage remote logins.

      4) grant sso_role to other users.

    

     A person having the sa_role in sybase is considered the superuser of the system, and can do everything except the tasks listed above.

         

Q18: What is the difference between a primary key and a unique key ?         

A18: A column defined as a primary key does not allow null values, but a column defined as a unique key allows one null value. A primary key by default creates a clustered index, whereas a unique key creates a non-clustered index.

    

Q19: What are dirty reads?

A19: A ‘dirty read’ occurs when one transaction modifies a row and a second transaction reads that row before the first transaction commits the change. If the first transaction rolls back the change, the information read by the second transaction becomes invalid.

  

Q20: How do I insert duplicate rows in a unique index?         

A20: You cannot; a unique index rejects duplicate key values. The ignore_dup_key option on a unique index cancels duplicate inserts with a warning instead of aborting the transaction, and the “allow duplicate rows” (allow_dup_row) option applies only to non-unique clustered indexes.

    

Q21: Which index you prefer for a table that has lot of updations and insertions?

A21: A non-clustered index, because a clustered index maintains the physical order of the data, so heavy insertions and updates force rows to move (page splits).

   

Q22: What are segments? What are the uses of segments?

A segment is a named subset of a database device. It is basically used in fine-tuning or optimizing the performance of the server. Placing a table on one segment and its non-clustered index on another segment causes the reads and writes to be faster. Similarly, placing a database’s data on one segment and its log on a separate segment (on different devices) ensures that there is no disk contention during physical reads/writes and logging.

    

Q23: What are procedures and what are the uses of procedures?

A23: There are system procedures and stored procedures. Procedures can:

                        # Take parameters

                        # Call other procedures

                        # Return a status value to a calling procedure or batch

                          to indicate success or failure, and the reason for failure.

                        # Return values of parameters to a calling procedure or batch

                        # Be executed on remote SQL Servers

 

Q24: What is the difference between executing a set of statements in a batch and executing the statements in a procedure?     

A24: A batch takes more time than a procedure because it must be parsed and compiled on every execution, whereas a procedure’s query plan is compiled once and then reused from cache.

 

Q25: What will you do if one out of five users complains that the system is working slowly?

A25: Check (with sp_who and sp_lock) whether a long-running batch process is holding locks or resources that block that user.

 

Q26: A transaction T1 is defined in a procedure X. X calls another procedure Y. Can one issue a “rollback T1” statement in procedure Y?

A26: Yes.

 

Q27: What is the difference between truncate and delete?    

A27: Truncate: truncate table removes all rows without logging individual row deletions in syslogs, cannot be rolled back, and is faster than delete.

             Delete: delete removes rows one at a time, logs every row in syslogs so the operation can be rolled back (using a rollback statement), and is slower.

 

Q28: Can one use DDL commands in a transaction ?

A28: Yes, by setting the database option “ddl in tran” to true.

 

Q29: What are the restrictions on updating a table through views ?

A29: An update through a view may affect columns of only one underlying table at a time, and columns that are computed or aggregated in the view cannot be updated.

 

Q30: What do you do after issuing the sp_configure command ?

A30:  Issue Reconfigure with override. 

 

Q31: What is Intent lock?

A31: Intent locks indicate the intention to acquire a shared or exclusive lock on a data page. An intent lock is a table-level lock that prevents other transactions from acquiring a conflicting table-level lock.

 

Q32: How do I call a remote procedure?

A32: Execute remote_server_name.database_name.object_owner_name.procedure_name

 

Q33: Why do you dump database after issuing dump tran with no_log statement?

A33: Dump tran database_name with no_log clears the log without dumping it. Therefore complete recovery becomes impossible if the database fails or gets corrupted. To have a copy of all changes made, we should dump the database.

 

Q34: What databases are created during installation?

A34: Four databases are created during installation. They are

      a) master b) model c) tempdb d) sybsystemprocs.

      Optional databases are pubs2 and sybsyntax.

     

Q35: How to display the current users role?

A35: sp_displaylogin login_name shows the roles granted to a login; select show_role() shows the roles currently active for your session.

 

36.  When a record is deleted, what happens to the remaining records on that page?

A36. No physical movement of data occurs at the time the record is deleted; the record is tagged for future physical deletion.

 

37.  When does sybase reduce the amount of space allocated to an object when a large number of records have been deleted?

A37. When no more records reside on an extent, the extent is returned to the pool for use by other objects. There are several ways to force this: one is to drop the clustered index and re-create it; another is to bcp out, truncate the table (deallocating its extents), and then bcp the data back in (allocating only enough extents to hold it).

 

38. Explain what happens when clustered index is created?

    * The data is physically sorted.

    * A sufficient amount of free space (approximately 1.2 times the size of the actual data) is required for the sorting process.

 

39. What happens when non-clustered index was created?

A leaf level is created by copying the specified index columns. The leaf level is sorted and uses pointers to the associated data pages.

 

40. When you install Sybase SQL Server what other server needs to be installed?

    The Backup Server

 

41. Why would we define a fill factor when creating an index?

A41. To leave free space on each index page, so that subsequent inserts and updates cause fewer page splits on heavily modified tables.

 

42. How do we increase the size of database?

A42. With alter database (alter database database_name on device_name = additional_MB).

 

43. What utility does sybase use to import large volumes of data?

A43. BCP
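
A sketch of loading a table from the Unix command line (server, database, table, and file names are illustrative; -c selects character format, -b sets the batch size):

bcp dbname..big_table in /data/big_table.dat -c -b 1000 -Usa -SSYBASE1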

 

44. When fast bcp is used to load data, what effect does it have on the transaction log?

A44. When fast bcp is used, Sybase does not log the individual row inserts; it logs only the page allocations, in case of failure.

 

45.  What is stored in syslogs?

A45. The transaction log.

 

46. Would frequent transaction log dumps be used for an application classified as decision support or on line transaction processing?

A46. On line transaction processing

 

47. What happens when we try to create a unique index on a column that contains duplicate values?

A47. The create index command fails with an error and the index is not created; the duplicates must be removed first.

 

48. Does Syb allows null values in a column with unique index?

        Yes, one null value; to prevent it, declare the column not null when creating the table.

 

49. When creating a non-unique clustered index, why would we use the ‘ignore_dup_row’ option?

A49. So that the operation can complete: existing duplicate rows are deleted as the index is created, and later duplicate inserts are cancelled with a warning instead of aborting the batch.

    

50. What does the ‘update statistics’ do?

A50. It refreshes the distribution statistics kept for index keys, which the query optimizer uses to choose access paths.
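
For example (the table name is illustrative; sp_recompile makes cached plans pick up the new statistics on their next execution):

1> update statistics big_table
2> go
1> sp_recompile big_table
2> go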

 

51. Describe some scenarios that would cause the transaction log to fill up.

        * transaction log not dumped often enough

        * when a single insert, update or delete affects a large amount of data

        * when a transaction remains open for a long time

 

52. What are the DBCC commands ?

A52. dbcc checktable (table_name)           { checks a specific table’s consistency }

     dbcc checkdb (db_name)                 { checks all tables in a database }

     dbcc checkcatalog (db_name)            { checks the system tables }

     dbcc checkalloc (db_name)              { checks page allocations }

     dbcc tablealloc (table_name)           { checks table allocation pointers }

     dbcc indexalloc (table_name, index_id) { checks index page pointers }

     dbcc fix_alloc (db_name)               { fixes allocation pages reported by checkalloc }

     dbcc dbrepair (database_name, dropdb)  { drops a corrupt database }

 

53. what is the command “dbcc dbrepair” used for?

A53. If a database is corrupt and cannot be removed with drop database, dbcc dbrepair(database_name, dropdb) drops it.

 

54. why should we separate transaction log and database on to separate physical  devices?

A54. To avoid disk contention between data I/O and log writes, and to preserve recoverability: when data and log share a device, the transaction log cannot be dumped separately, so recovery is limited to the last full database dump.

 

55. when would you use ‘dump tran with no_log’?

A55. Only as a last resort, when the transaction log is completely full and a normal dump (even with truncate_only) fails.

 

56.  If an old transaction remains open and is causing the log to fill up, what should you do?

A56. Kill the process that is holding the transaction open.

 

57. what is the role of sysusages table in MASTER DATABASE?

57A: The creation of a new database is recorded in the master database tables      sysdatabases and sysusages.

 

58. what is the system procedure created by the user that monitors the space usage on the segments and dumps the log when the last-chance threshold is reached?

A58. sp_thresholdaction

 

59. what is the last-chance threshold?

A59. A threshold the server automatically maintains on the log segment of each database; when free log space falls below it, the server calls sp_thresholdaction and suspends (or aborts) transactions until log space is freed.

60. what is the recovery interval? It is an estimate of the time required by SQL server to recover a database in case of system failure (see the manual for details).

 

61. what does ‘truncate log on checkpoint’ do?

A61. It makes the server truncate the committed portion of the transaction log at every automatic checkpoint. Log dumps then cannot be used for up-to-date recovery, so the option suits development databases only.

 

62. what happens (internally) when you try to insert a row into a table with a clustered index and the data page is full? A page split occurs: about half the rows are moved to a newly allocated page to make room.

      

63. how are rows added to a table that does not have a clustered index? They are added to the last page, at the bottom of the table (heap).

 

64. how do we recover the master database after it becomes corrupt?

        * Replace the generic master database (buildmaster -m)

        * Start the SQL Server in single user mode (startserver … -m)

        * Load the most recent dump of master

        * Restart the SQL Server in single user mode

        * Check sysusages, sysdevices and sysdatabases against a recent backup copy

        * Run dbcc checkalloc and dbcc checkdb on all databases

        * Dump the master database

 

65. How do you change configuration values?

65A: With sp_configure.

 

66. If the syslogs of a database is full, what steps are taken?

66A: (a)  dump the transaction log of the database:

                         dump tran database_name with truncate_only

                         or, if that fails, dump tran database_name with no_log

            (b)  alter database to extend the log

 

67. What are the difference between clustered and non_clustered indexes?

67A: Clustered indexes dictate the physical order of data; the leaf level of a clustered index is the data itself. A non-clustered index has a row in its leaf level for every row in the table.

             

68. What does update statement do?

 

69. What are the constraints in sybase?

 

70. What do dump database and dump tran do?

A70. dump database makes a full backup of the database (data and log); dump tran backs up the transaction log and truncates its committed portion.

 

71. Will a file containing rows that have negative values for column b be added during a bulk-copy?

71A: Yes; rules, triggers and constraints are not recognized during a bulk-copy operation.

72. What command you use to change the default value in column b to 5?

72A: Alter table table_name replace b default 5

 

73. what system table contains objects such as tables, rules, defaults and triggers within a database?

73A: Sysobjects.

           

74. How many pages are allocated when a table is created?

74A: An extent, which is 8 pages.

 

75. What is difference between varchar and char?

75A: char is a fixed-length datatype, padded with trailing spaces; varchar is a variable-length datatype.

 

76. How many no of triggers can be created on a table?

76A: 3, i.e. insert, update, delete.

77. what is normalization?   difference between normalization and denormalization?

 

77A: Normalization produces smaller tables with smaller rows:

            More rows per page (less logical I/O)

            More rows per I/O (more efficient)

            More rows fit in cache (less physical I/O)

           

Searching, sorting and creating indexes are faster, since tables are narrower, and more rows fit on a data page.

 

You usually wind up with more tables. You can have more clustered indexes (you get only one per table), so you get more flexibility in tuning queries.

 

Index searching is often faster, since indexes tend to be narrower and shorter.

 

More tables allow better use of segments to control physical placement of data.

 

You usually wind up with fewer indexes per table, so data modification commands are faster.

 

You wind up with fewer null values and less redundant data, making your database more compact.

Triggers execute more quickly if you are not maintaining redundant data.

 

Data modification anomalies are reduced.

 

Normalization is conceptually cleaner and easier to maintain and change as your needs change.

           

While fully normalized databases require more joins, joins are generally very fast if indexes are available on the join columns.  SQL server is optimized to keep higher levels of the index in cache, so each join performs only one or two physical I/Os for each matching row.  The cost of finding rows already in the data cache is extremely low.

           

First Normal Form, Second Normal Form, Third Normal Form.

 

78. How do entity-relationship concepts map into Sybase 10?

78A: Relations become tables, Attributes become columns, Relationships become data references (primary and foreign key references).

     

79. What is data integrity types?

79A: Entity integrity and referential integrity.

 

80. How to update row by row from the database?

80A: Through cursors.
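
A minimal cursor sketch, assuming the pubs2 sample database (the 10% price update is illustrative):

1> declare price_cur cursor
2> for select price from titles
3> for update of price
4> go
1> open price_cur
2> go
1> fetch price_cur
2> go
1> update titles set price = price * 1.1
2> where current of price_cur
3> go
1> close price_cur
2> go
1> deallocate cursor price_cur
2> go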

 

81. What are the system tables in sybase10?

81A: Tables such as sysobjects, syscolumns, sysindexes, sysusers and syslogs exist in every database; server-wide tables such as sysdatabases, sysdevices, sysusages and syslogins live in master. (sybsystemprocs is the database that holds the system procedures, not a system table.)

 

82. What is difference between sybase 4.2 and sybase10?

82A: System 10 added cursors, the sybsystemprocs database, roles, and thresholds, among other features.

 

83. How many types of locks? explain?

83A: Queries can request locks with holdlock, noholdlock, or shared.

            Page locks: shared locks, exclusive locks, update locks

            Table locks: intent lock, shared lock, exclusive lock

            Demand locks: SQL Server sets a demand lock to indicate that a transaction is next in line to lock a table or page.

           

84. Difference between oracle and sybase?

84A: Oracle ships with GUI administration tools, whereas Sybase has no comparable GUI concept.

      

85. How to create data type?

85A: With sp_addtype.
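
For example (type, table, and column names are illustrative; the arguments are the type name, the physical type, and the null behaviour):

1> sp_addtype ssn_t, 'char(9)', 'not null'
2> go
1> create table employees (emp_id int, ssn ssn_t)
2> go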

 

86. What type of locking does Sybase follow: row level or column level?

86A: Neither by default: classic SQL Server locks at the page level (escalating to table locks); row-level (datarows) locking became available with ASE 11.9.2.

 

87. What is the diff between Implicit and Non-Implicit cursors?.

87A: Implicit cursors are created by the system, whereas non-implicit (explicit) cursors are created by the user.

 

88. What are correlated sub-queries? What are the different types of triggers (automated triggers)?

 

89. What does NLM stand for? – NetWare Loadable Module.

 

90. What are the different types of backups in a Unix environment?

 

91. What are the differences between MS-SQL Server and Sybase SQL Server?

 

92. How do you execute a set of procedures at one time?

92A: By calling the other procedure by EXEC command in the main procedure

 

92 (b). How do you audit the system (an sso task)?

92 (b):  sp_auditoption

            sp_auditdatabase dbname

            sp_auditobject table_name

            sp_auditsproc proc_name

            sp_auditlogin login_name, “…”, on | off

            sp_auditrecord

            sp_configure “audit queue size” , #_audit_records

 
