For your purpose @Daniel's simple query should do the trick:
select repeat('A', (3 * 2^20)::int); -- "3 MB"
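To verify that it produces what you expect, octet_length() reports the payload in bytes; with a single-byte character like 'A' (true in UTF-8 and other single-byte server encodings) it comes out to exactly 3 MiB:
select octet_length(repeat('A', (3 * 2^20)::int));  -- 3145728 bytes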
"Size" depends. There is the size on disk, with or without overhead, with or without padding, with or without compression, with or without indexes / toast table / free space map ...
There is also the size of RAM needed to store the same data (typically more than on disk), the size of the text representation when sent over the wire, the compressed size, with or without tcp/ip overhead ...
You can use the function pg_column_size() to get an approximation.
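For a feel of what it reports (exact numbers vary by version and storage; pg_column_size() includes the varlena header, while octet_length() counts payload bytes only):
select pg_column_size('A'::text) AS with_header  -- payload plus varlena header, a few bytes more than 1
     , octet_length('A'::text)   AS raw_bytes;   -- 1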
Retrieve just enough rows from any table that's big enough:
SELECT *
FROM   any_big_tbl
WHERE  big_id <= (
   SELECT big_id
   FROM  (
      SELECT big_id, sum(pg_column_size(b)) OVER (ORDER BY big_id) AS size
      FROM   any_big_tbl b
      ) sub
   WHERE  size > (3 * 2^20)  -- "3 MB"
   ORDER  BY size
   LIMIT  1
   );
big_id being the primary key of any_big_tbl.
The subquery computes a running sum of the bytes used to store each row and finds the id of the first row that pushes the total past the requested size. The outer query returns all rows up to and including that id.
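As a sanity check, the total for the rows up to that id should land just above the threshold. A sketch, with 12345 standing in for the id the subquery actually returns:
SELECT count(*) AS rows_returned
     , sum(pg_column_size(b)) AS total_bytes  -- just over 3 * 2^20
FROM   any_big_tbl b
WHERE  big_id <= 12345;  -- placeholder for the id found above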
Or, to measure the size of the text representation, including parentheses for the row and escape characters where needed (the text form is typically larger per row, so fewer rows are needed):
SELECT *
FROM   any_big_tbl
WHERE  big_id <= (
   SELECT big_id
   FROM  (
      SELECT big_id, sum(pg_column_size(b::text)) OVER (ORDER BY big_id) AS size
      FROM   any_big_tbl b
      ) sub
   WHERE  size > (3 * 2^20)  -- "3 MB"
   ORDER  BY size
   LIMIT  1
   );
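To see what b::text measures, cast a single row to text. The output line below is a made-up illustration of the composite format: the row is wrapped in parentheses, values are comma-separated, and embedded quotes are doubled:
SELECT b::text FROM any_big_tbl b LIMIT 1;
-- (1,"some text with ""quotes""",2014-07-03)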
More about measuring "size":
Measure the size of a PostgreSQL table row
select repeat('A', 3000000) will give you 3 million bytes to fetch, if that's what you're after. – Daniel Vérité Jul 03 '14 at 12:13