fn_dblog is the way to look backwards, as the other commenters have said.
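For example, a rough sketch of what that can look like (fn_dblog is undocumented and unsupported, so the column names and operation codes below are based on common usage and may vary between versions; tbl_example is just a placeholder name):

-- illustrative only: fn_dblog is undocumented, check its output on your own version first
select [Current LSN], Operation, [Transaction ID], AllocUnitName
from fn_dblog(null, null)               -- null, null = the whole active portion of the log
where Operation = 'LOP_DELETE_ROWS'     -- individual row deletes
and AllocUnitName like 'dbo.tbl_example%'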
Only allowing users to modify the data through stored procedures is a great way to prevent this happening in the first place, as you can add logic to stop users from mass-modifying records they shouldn't, or to ensure they have to provide correct values for updates. Depending on how the applications currently access the data, this may mean application changes, which may or may not be possible for you.
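As a rough illustration (the procedure name and parameter are made up for the example, using the tbl_example table from the trigger example below), a wrapper procedure can insist on a key value so a careless call can never wipe out the whole table:

-- hypothetical example: usp_delete_example and the id column are illustrative only
create procedure dbo.usp_delete_example
    @id int
as
begin
    set nocount on

    -- refuse calls that don't identify a specific row
    if @id is null
    begin
        raiserror('An id must be supplied.', 16, 1)
        return
    end

    delete from tbl_example where id = @id   -- only ever touches the keyed row
end
go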
If you can't do that, then a quick and simple option with SQL Server 2005 is a trigger that does some DML auditing at the table level (SQL Server 2008 onwards has auditing tools built in).
A simple solution for your issue might be:
drop table audit_test
go
create table audit_test
(
uname varchar(50),
[date] datetime,
what nvarchar(4000),
host varchar(50)
)
go
create trigger ddlcheck on tbl_example
for update, delete
as
-- capture the statement currently in this session's input buffer
declare @tbltmp table (eventtype nvarchar(30), para int, strsql nvarchar(4000))
insert into @tbltmp exec ('dbcc inputbuffer(' + cast(@@spid as varchar(10)) + ')')
-- record who ran it, when, what was run and from which host
insert into audit_test select SUSER_NAME(), GETDATE(), strsql, HOST_NAME() from @tbltmp
go
This will fire any time a query updates or deletes from the table. It uses DBCC INPUTBUFFER ( http://msdn.microsoft.com/en-us/library/ms187730(v=sql.90).aspx ) to get the command that was issued, which gives you a table populated with every update and delete statement run against that particular table, plus who issued it, from which host, and when.
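To sanity-check it, you can fire the trigger with a harmless statement and then read the audit table back (assuming tbl_example has an id column, as in the examples further down):

-- fire the trigger without changing anything meaningful
update tbl_example set id = id where id = 1
go
-- see what was captured
select uname, [date], what, host
from audit_test
order by [date] desc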
Now, this could be a lot of logging data on a busy table, so it could be restricted to only catch 'bad' queries (say, those that update or delete 1000+ rows) by altering the trigger to:
alter trigger ddlcheck on tbl_example
for update, delete
as
declare @cnt int
-- the deleted pseudo-table holds the affected rows for both updates and deletes
select @cnt = count(*) from deleted
if @cnt >= 1000   -- only log statements touching 1000+ rows
begin
    declare @tbltmp table (eventtype nvarchar(30), para int, strsql nvarchar(4000))
    insert into @tbltmp exec ('dbcc inputbuffer(' + cast(@@spid as varchar(10)) + ')')
    insert into audit_test select SUSER_NAME(), GETDATE(), strsql, HOST_NAME() from @tbltmp
end
go
This logs less data, but the trigger still needs to evaluate on every operation, so it may have a performance impact, which will need to be measured and tested. It will also miss the actions if the user submits lots of individual statements. For example, it
won't catch:
delete from tbl_example where id=1
delete from tbl_example where id=2
.....
delete from tbl_example where id=1000
but will catch:
delete from tbl_example where id>0 and id<=1000
Hope this is of some help for the future.