Welcome aboard the good ship SSIS. I'll be your captain this evening... Not everyone's first SSIS package is going to be as interesting as yours, so some of the things I'm going to cover are techniques I only worry about when things fall into the 20% of the 80/20 rule.
SSIS package timings
When you run a package in Visual Studio/SSDT, it runs under the context of the debugger. That's how we get the pretty UI, progress displays and the ability to set breakpoints. That's not free and I think an old rule of thumb was it could cause up to a 20% drag on processing. The only true way to get processing time is to run it in a controlled environment outside of Visual Studio.
But that gets you into a licensing issue. Developing SSIS packages is free. But you cannot run an SSIS package outside of SSDT without a licensed installation of SQL Server on your machine. Developer edition is stupid cheap. Standard and Enterprise editions, not so much. If you were to Ctrl-F5, it would run the SSIS package outside of the debugger and be some degree faster, but in your case, it'd likely error out specifying that "to run {task name} outside of Visual Studio, you must install the component". What that really means is you need the SQL Server Integration Services service installed.
Once you can run an SSIS package outside of the context of the debugger, then you can get a better set of performance metrics. The easiest place to do this is the SSISDB catalog in SQL Server. There are pretty reports that let you get down to component-level timing, as long as the package is run at the Performance logging level (or you're on 2016+ and can define your own custom logging level).
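If it helps, here's a rough sketch of what that looks like in T-SQL: start a catalog execution at the Performance logging level, then pull the component-level timings back out of SSISDB. The folder, project, and package names are placeholders for whatever you've deployed.

-- Sketch only: folder/project/package names below are placeholders
DECLARE @execution_id BIGINT;

EXEC SSISDB.catalog.create_execution
    @folder_name     = N'MyFolder',
    @project_name    = N'MyProject',
    @package_name    = N'MyPackage.dtsx',
    @use32bitruntime = 0,
    @execution_id    = @execution_id OUTPUT;

-- LOGGING_LEVEL 2 = Performance; SYNCHRONIZED 1 = wait for the run to finish
EXEC SSISDB.catalog.set_execution_parameter_value @execution_id = @execution_id,
    @object_type = 50, @parameter_name = N'LOGGING_LEVEL', @parameter_value = 2;
EXEC SSISDB.catalog.set_execution_parameter_value @execution_id = @execution_id,
    @object_type = 50, @parameter_name = N'SYNCHRONIZED', @parameter_value = 1;

EXEC SSISDB.catalog.start_execution @execution_id = @execution_id;

-- Component-level timings captured by the Performance logging level
SELECT
    ecp.task_name,
    ecp.subcomponent_name,
    ecp.phase,
    SUM(DATEDIFF(MILLISECOND, ecp.start_time, ecp.end_time)) AS elapsed_ms
FROM SSISDB.catalog.execution_component_phases AS ecp
WHERE ecp.execution_id = @execution_id
GROUP BY ecp.task_name, ecp.subcomponent_name, ecp.phase
ORDER BY elapsed_ms DESC;

The same data feeds the built-in All Executions / Execution Performance reports in SSMS if you'd rather click than query.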
There are older ways of doing this, via package Logging, but the current UI tucks it away. If the next sections don't provide illumination, ping me and I can flesh out the old ways.
Quick and dirty performance thoughts
Based on what you've described, it sounds like your Script Component is going to be the source of your poor performance. I don't know if you've gotten through all 330M rows in your source file. If the package runs in a reasonable amount of time, then I'd say keep using the whole file as your testbed. Personally, I'd take the first N lines of it to speed up your time to failure.
All the components you described are synchronous components. 1 row in = 1 row out. Those are your fastest dataflow components as there are no memory copies happening and the activities can get parallelized.
Well, maybe it's all synchronous - a script component by default acts in a synchronous fashion but you can change the settings to make it asynchronous.
The quick way to tell whether it's the OLE DB Destination(s) is to delete them from your package. Version control is going to be your friend here - make a dumb testing change, roll back to a good state, try the next change, and repeat.
If normal execution is 10 minutes and a run with the destinations removed is still about 10 minutes, then sending the rows to the destination database, and whatever actions it must take to save the data, are not a factor in the timing. I've pushed large datasets into SQL Server; a well-tuned destination (both SQL Server and the SSIS componentry) can make it sing. A poorly tuned destination is gonna cry with your data volume.
My first run of your package would be Flat File Source -> Conditional Split => Row Count Good; Row Count Bad. Run that and see what happens. This is going to help you identify a theoretical best performance profile. If 330M rows take 1 minute per million just to read off disk, then you're looking at, what, just shy of 6 hours? Where's that source file located? On your local disk or on a server share somewhere? You might get better performance copying from a network share to a local drive as a precursor step and then reading from local disk. And then you get to factor in that your workstation, hopefully, has slower disks than the actual server this will run on.
The actual timings are less important than just knowing the best you can expect from your package, compared to the subsequent things we do. Add your error destination back in and run it. How did that affect performance?
Remove the error destination and add your script back in. Actually, what does "updates the package variables for duplicate detection" mean? If the source data has a literal \ followed by an N, you can skip your Script Component and use a Derived Column transformation to do the same thing. Assume you have a CustomerName column:
([CustomerName] == "\\N") ? NULL(DT_WSTR, 100) : [CustomerName]
You can either define a new column CustomerName_Clean or replace the existing. I usually favor adding a new column because if I have to troubleshoot, I like to see what I started with alongside what I ended up with.
Destination
OLE DB Destination still shows the assorted options so I'm not sure where you're seeing only the one.
This is a place where you will need to change the default values. That Maximum Insert Commit Size - the default means it will do an all-or-nothing commit of those 330M rows, and that's gonna cause pain for the database. Pick a number, say 100k, and start with that. Work with your DBA to see what is happening from their end as you start loading your target table. They may want to proactively grow their log before it starts ingesting your feed.
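If your DBA wants to get ahead of that log growth, the general idea is something like this (the database and logical file names here are made up; check sys.database_files for the real ones):

-- Hypothetical names: look up the real logical log file name first
SELECT name, type_desc, size * 8 / 1024 AS size_mb
FROM TargetDB.sys.database_files;

-- Pre-grow the log in one shot rather than paying for a pile of
-- autogrowth events in the middle of the load
ALTER DATABASE TargetDB
MODIFY FILE (NAME = N'TargetDB_log', SIZE = 50GB);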
If you have non-clustered indexes on that target table, remove them before you load. With growth of 100% or more and a single NCI, you're usually better off dropping and recreating it versus paying to keep the base table/heap and an additional index in sync during the data load. If you have multiple NCIs on a table, the rule of thumb was 10% growth as the tipping point (the SQLCAT articles are no longer in existence or I'd reference them here).
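The mechanics are plain T-SQL on either side of the load, often in Execute SQL Tasks before and after the Data Flow. Table and index names below are made up:

-- Before the load: drop the NCI (IF EXISTS syntax is SQL Server 2016+)
DROP INDEX IF EXISTS IX_Customer_Email ON dbo.Customer;

-- ... the SSIS data flow loads dbo.Customer here ...

-- After the load: rebuild it once instead of maintaining it row by row
CREATE NONCLUSTERED INDEX IX_Customer_Email
    ON dbo.Customer (EmailAddress)
    WITH (SORT_IN_TEMPDB = ON);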
Data Types
Double-click the connector between your Flat File Source and your first component, the Conditional Split, and look at the Metadata tab. Do any of those columns report a text data type, DT_TEXT or DT_NTEXT? If so, that's gonna be part of your headache. Large object types can get paged to disk when they pass through a data flow, so now you're paying for surprise reads and writes (on your C drive unless you specified otherwise) along with the reads from your source file and the writes to wherever the database lives.