Data quality in SQL Server // TRY_CONVERT for float and real done safely

Data quality with floating-point columns is a discipline of its own — and TRY_CONVERT(float, …) has a few quirks that tend to slip past the import path: an empty string becomes 0 (not NULL), a comma as decimal separator yields NULL, and even after a clean conversion 2 + 3.4 - 3.4 - 2 does not return exactly 0 as a float. This article sorts out the rules, shows the safe conversion pattern, and spells out the difference between decimal (precise) and float (approximating).

At a glance:

  • Two floating-point types with clearly tiered precision — real (= float(24), 4 bytes) and float (= float(53), 8 bytes).
  • When converting from text, TRY_CONVERT accepts only the period as decimal separator; a comma yields NULL regardless of locale, and scientific notation ('1.23E-3') is supported.
  • The empty string is the edge case: it returns 0, which is usually wrong semantically in an ETL context — a CASE/TRIM/REPLACE pattern cleanly maps it to NULL instead.
  • float is an approximating type — operations produce rounding errors on the order of 1E-16. For exact values (money, regulated reporting figures), reach for decimal.
  • In Postgres, CAST(... AS double precision) throws an exception on invalid input — a PL/pgSQL wrapper try_cast_double provides the NULL-instead-of-exception behavior. IEEE 754 is identical across platforms.

Prerequisite: SQL Server 2017+ (TRY_CONVERT has shipped since 2012; the safe pattern below uses TRIM, which requires SQL Server 2017 — on 2012–2016, use LTRIM(RTRIM(...)) instead). Examples run without a sample dataset — pure inline literals and DECLARE variables.

Overview

SQL Server provides two data types for storing floating-point numbers: float(n) and real. Neither is a precise data type: both store values in a binary mantissa/exponent representation. That keeps large value ranges representable at moderate storage cost, but at the expense of accuracy. A float variable may appear to hold the value 0 while carrying a non-zero remainder in a less significant decimal place. Typical use cases: technical measurements, sensor data, scientific calculations — anywhere an accuracy of 7 to 15 significant digits is enough.

Microsoft Learn for TRY_CONVERT (and for CAST and CONVERT) states that the value to be converted can be any expression. That means both nvarchar strings from CSV/JSON/XML imports and typed numbers (e.g. decimal(18, 5) from a pipeline calculation) are valid inputs. This article distinguishes between the two cases — Text and Non-Text — and then presents a safe conversion pattern.

Precision and storage

Type choice comes before conversion. The two floating-point types differ in the number of bits reserved for the mantissa (the significand, i.e. the digit portion that the exponent scales) — and therefore in storage size and accuracy:

Data type            Mantissa bits  Bytes  Significant decimal digits  Value range (approx.)     Typical use
real  (= float(24))  24             4      ~7                          -3.4E+38 .. 3.4E+38       Sensor values, pixel coordinates, simple measurements
float (= float(53))  53             8      ~15                         -1.79E+308 .. 1.79E+308   Scientific calculations, high-precision measurements

The parameter n in float(n) accepts values from 1 to 53. SQL Server buckets the resolution into two ranges: for n from 1 to 24, 4 bytes are reserved (single precision, identical to real); for n from 25 to 53, 8 bytes are reserved (double precision, identical to float without a parameter). If n is omitted, SQL Server assumes 53. In short: real and anything up to float(24) are binary-identical; float(25) through float(53) and float without a parameter are binary-identical.

This article focuses mostly on float without a parameter (i.e. float(53)); the differences against real concern precision and storage only, not the conversion rules.
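
The bucketing can be checked directly with DATALENGTH; a minimal sketch:

  1: DECLARE @r   AS real      = 123.456;
  2: DECLARE @f24 AS float(24) = 123.456;
  3: DECLARE @f53 AS float     = 123.456;
  4: 
  5: SELECT DATALENGTH(@r)    -- 4 bytes (binary32)
  6:      , DATALENGTH(@f24)  -- 4 bytes (bucketed down to single precision)
  7:      , DATALENGTH(@f53); -- 8 bytes (binary64)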

Text

When a text value is to be converted to a float, the following scenarios are worth investigating:

  • The text contains a valid number in decimal notation ('123.456').
  • The text uses a comma as decimal separator ('123,456').
  • The text is an empty string or contains only whitespace.
  • The text uses scientific notation ('123456E-3').

The following calls to TRY_CONVERT cover those cases. The expected result is noted after each statement:

  1: SELECT TRY_CONVERT(float, NULL        ) -- NULL
  2: SELECT TRY_CONVERT(float, N'123'      ) -- 123
  3: SELECT TRY_CONVERT(float, N'123,456'  ) -- NULL
  4: SELECT TRY_CONVERT(float, N'123.456'  ) -- 123.456
  5: SELECT TRY_CONVERT(float, N''         ) -- 0
  6: SELECT TRY_CONVERT(float, N' '        ) -- 0
  7: SELECT TRY_CONVERT(float, N'  123.456') -- 123.456
  8: SELECT TRY_CONVERT(float, N'123.456  ') -- 123.456
  9: SELECT TRY_CONVERT(float, N'123456E-3') -- 123.456

Numbers in decimal notation with a period as separator are converted as expected; leading and trailing whitespace does not affect the result; scientific notation (mantissa E exponent) works the same way. TRY_CONVERT(float, …) is locale-independent — neither SET LANGUAGE nor SET DATEFORMAT affect the interpretation of a decimal separator. The conversion expects a period as decimal separator at all times, regardless of server or session settings; a comma in the text therefore yields NULL.
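
The locale independence is easy to verify; German is a valid language name on a default installation:

  1: SET LANGUAGE German;
  2: SELECT TRY_CONVERT(float, N'123,456');  -- NULL, even under a German session language
  3: SELECT TRY_CONVERT(float, N'123.456');  -- 123.456
  4: SET LANGUAGE us_english;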

That already produces two surprises. Line 3 shows that a comma in the text path is not interpretable as a thousands separator nor as a German decimal separator — the call fails. Anyone importing from a CSV using German notation has to normalize commas to periods before calling TRY_CONVERT. Lines 5 and 6 show that an empty string, or a string with only spaces, is converted to the value 0. While the NULL result for a comma is intuitive, the 0 conversion of an empty string is usually semantically wrong in an ETL context: an empty CSV cell is an “unknown”, not a zero. The safe variant is documented further down as a pattern.

Non-Text

When the input is not text but already typed (an integer or a decimal), the scenarios shrink to:

  • The value passed is an integer or a typed decimal.
  • The value is written as a comma-style literal (123, 456) — and the parser reads it as two arguments.
  • The value is a number in scientific notation.

The first five tests:

  1: SELECT TRY_CONVERT(float, NULL      ) -- NULL
  2: SELECT TRY_CONVERT(float, 123       ) -- 123
  3: SELECT TRY_CONVERT(float, 123, 456  ) -- 123
  4: SELECT TRY_CONVERT(float, 123.456   ) -- 123.456
  5: SELECT TRY_CONVERT(float, 123456E-3 ) -- 123.456

Line 3 deserves special attention. The call looks as if it should convert '123,456' as a decimal — but the comma in T-SQL is not a decimal separator; it is the argument separator between function parameters. TRY_CONVERT is defined with up to three arguments (data_type, expression [, style]), and the parser reads the 456 as the third parameter (style). For a conversion to float the style parameter has no effect (float style codes only govern the opposite direction, float to character data), and what remains is the first expression 123. In plain terms: TRY_CONVERT(float, 123, 456) returns 123, neither 123.456 nor NULL. Anyone writing typed numbers with a comma as a literal in code silently produces a wrong value — unlike the text path, where a comma cleanly yields NULL.

A typed decimal is otherwise transferred verbatim into the float value range (line 4). Scientific notation works the same way as in the text path (line 5).
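
Style codes for float do exist, but they apply to the opposite direction: converting a float value to character data. A sketch of the documented styles 0, 1 and 2 (digit counts per the CONVERT documentation):

  1: DECLARE @v AS float = 123456.789;
  2: 
  3: SELECT CONVERT(varchar(30), @v, 0)  -- style 0: max. 6 significant digits, scientific only when needed
  4:      , CONVERT(varchar(30), @v, 1)  -- style 1: always scientific notation, 8 digits
  5:      , CONVERT(varchar(30), @v, 2); -- style 2: always scientific notation, 16 digits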

Decimal vs. Float

In contrast to float and real, decimal is a precise data type: it stores values as a signed sequence of decimal digits with a fixed number of places (decimal(precision, scale)). The fractional digits are held exactly, and an arithmetic like 2 + 3.4 - 3.4 - 2 reliably yields 0. float stores values in a binary representation with a fixed mantissa length — many decimal numbers that look “clean” to a human (0.1, 0.2, 3.4) are not exactly representable in that binary form. The consequence: operations accumulate rounding errors at the resolution of the mantissa.

The same arithmetic, once with float, once with decimal:

  1: DECLARE @f1 AS float = 2;
  2: DECLARE @f2 AS float = 3.4;
  3: DECLARE @f3 AS float = @f1 + @f2;
  4: 
  5: SELECT @f3 - @f2 - @f1                  -- 4.44089209850063E-16
  6:                                         -- 0.0000000000000004440892...
  7: 
  8: DECLARE @d1 AS decimal(2, 1) = 2;
  9: DECLARE @d2 AS decimal(2, 1) = 3.4;
 10: DECLARE @d3 AS decimal(2, 1) = @d1 + @d2;
 11: 
 12: SELECT @d3 - @d2 - @d1                  -- 0.0

In the float path, the subtraction leaves a remainder of 4.44E-16 — a typical rounding error within the precision of float's 53-bit mantissa (binary64 format per IEEE 754). The specific value is not “the resolution limit” but the consequence of accumulated rounding steps across the three operations involved — the same arithmetic on the same SQL Server reliably produces exactly this remainder. With real (24-bit mantissa, binary32), the remainder would be correspondingly larger because fewer bits are available for fractional precision. In the decimal path the result is exactly 0.0, because no binary approximation takes place.

Practical consequence: For values with strict accuracy requirements (monetary amounts, regulated reporting figures, inventory counts), decimal is the default. float and real are intended for values whose meaning is measurement-tolerant (sensor readings, physical constants, calculations with a clearly defined measurement precision). A full treatment of decimal is given in TRY_CONVERT // Converting data to decimal or numeric.

Caveat: decimal is not unconditionally exact — when converting to a lower scale, SQL Server silently rounds, without an error or NULL. For instance, TRY_CONVERT(decimal(10, 0), 1234.5) returns 1235, neither NULL nor an error (see FAQ).

Safe type conversion

The two sections above have shown two edge cases that have to be handled explicitly in the import path:

  • An empty string is converted to 0. When that is semantically wrong (the default case in CSV imports), the empty string has to be mapped to NULL before the TRY_CONVERT.
  • A comma as decimal separator yields NULL. When the source delivers German notation ('123,456'), the comma has to be replaced with a period before the TRY_CONVERT.

The following example handles both edge cases together:

  1: DECLARE @p_input AS nvarchar(30);
  2: SET @p_input = N'  123.45678';
  3: 
  4: SELECT TRY_CONVERT( float
  5:                   , REPLACE( CASE WHEN TRIM(@p_input) = ''
  6:                                 THEN NULL
  7:                                 ELSE @p_input
  8:                              END
  9:                             , ','
 10:                             , '.'
 11:                            )
 12:                   ) AS [Output];

If the pattern is needed across many columns in an ETL flow, abstract it into a user-defined function fn_try_convert_float(@p_input nvarchar) — see Design Pattern // Sichere Typ-Konvertierung mit T-SQL (in German).
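
A minimal sketch of such a function, wrapping the pattern above unchanged (the nvarchar(30) length is taken from the example and is an assumption; widen as needed):

  1: CREATE OR ALTER FUNCTION dbo.fn_try_convert_float (@p_input nvarchar(30))
  2:    RETURNS float
  3: AS
  4: BEGIN
  5:    RETURN TRY_CONVERT( float
  6:                      , REPLACE( CASE WHEN TRIM(@p_input) = ''
  7:                                    THEN NULL
  8:                                    ELSE @p_input
  9:                                 END
 10:                               , ','
 11:                               , '.'
 12:                              )
 13:                      );
 14: END;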

Postgres bridge

In multi-engine ETL pipelines the question of a Postgres counterpart comes up. The good news first: IEEE 754 is identical across platforms. A number stored in SQL Server as float (binary64) has the exact same bit representation in Postgres as double precision — and the same rounding error. Cross-DB calculations produce bit-identical results given a correct cast.

The conversion behavior differs in two central points, though:

  • CAST(s AS double precision) is Postgres’ default conversion — and it throws an exception on invalid input. There is no direct TRY_CONVERT counterpart; the call either succeeds or aborts.
  • Even Postgres 18 (released 2025-09-25) does not ship a built-in counterpart — neither a try_cast function nor the SQL/JSON CAST ... ON ERROR NULL syntax from SQL:2023 made it into 18.0. You still write the NULL-instead-of-exception wrapper yourself as a PL/pgSQL function.

Wrapper function as a PL/pgSQL counterpart to TRY_CONVERT(float, …):

  1: CREATE OR REPLACE FUNCTION try_cast_double (p_input text)
  2:    RETURNS double precision
  3:    LANGUAGE plpgsql
  4:    IMMUTABLE
  5: AS $$
  6: BEGIN
  7:    IF p_input IS NULL OR TRIM(p_input) = '' THEN
  8:       RETURN NULL;
  9:    END IF;
 10:    RETURN p_input::double precision;
 11: EXCEPTION
 12:    WHEN invalid_text_representation OR numeric_value_out_of_range
 13:    THEN
 14:       RETURN NULL;
 15: END;
 16: $$;

With this, try_cast_double('1,234') behaves like TRY_CONVERT(float, '1,234'): the call returns NULL instead of raising. The IF clause additionally absorbs the empty-string edge case — Postgres would respond to ::double precision on an empty string with an exception as well, not (like SQL Server) convert it to 0. An analogous wrapper for real is defined as try_cast_real(p_input text) RETURNS real.
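
Applied to a small derived table, the wrapper reproduces the T-SQL test matrix from the Text section, with the empty string mapping to NULL by design:

  1: SELECT raw_value, try_cast_double(raw_value) AS converted
  2: FROM (VALUES ('123.456')   -- 123.456
  3:            , ('1,234')     -- NULL (comma)
  4:            , ('')          -- NULL (empty string, per the IF clause)
  5:            , ('1.23E-3')   -- 0.00123
  6:      ) AS t(raw_value);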

When the input contains formatted numbers with thousands or decimal separators (German or US notation), to_number with a format pattern is the right tool:

  1: SELECT to_number('1.234,56', 'FM999G999D99');  -- 1234.56  (German notation, lc_numeric = 'de_DE.UTF-8')
  2: SELECT to_number('1,234.56', 'FM999G999D99');  -- 1234.56  (US notation, lc_numeric = 'en_US.UTF-8')

to_number is locale-dependent via the session variable lc_numeric — unlike SQL Server’s TRY_CONVERT, which accepts only the period regardless of locale. In an ETL context it is worth setting lc_numeric explicitly per session, so the import is decoupled from the server’s default configuration. Note: to_number returns numeric (precise!); for double precision a final cast is required (to_number(s, 'FM999G999D99')::double precision).
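
A per-session setup might look like this (the locale name 'de_DE.UTF-8' is an assumption; available names depend on the operating system):

  1: SET lc_numeric = 'de_DE.UTF-8';
  2: SELECT to_number('1.234,56', 'FM999G999D99')::double precision;  -- 1234.56
  3: RESET lc_numeric;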

Type mapping T-SQL ↔ Postgres:

SQL Server                               Postgres                                          IEEE 754 format  Bytes
float (= float(53))                      double precision                                  binary64         8
float(24) and real                       real                                              binary32         4
TRY_CONVERT(float, s)                    try_cast_double(s) (wrapper)
TRY_CONVERT(real, s)                     try_cast_real(s) (wrapper)
TRY_CONVERT(float, s) with separators    to_number(s, 'FM999G999D99')::double precision

Summary

When converting a value to the float data type, three edge cases stand out: an empty string becomes the number 0, a comma as decimal separator yields NULL (regardless of locale), and a comma literal in code (123, 456) is read as a function parameter. The safe conversion pattern above catches the first two; the third is a matter of code discipline (always write typed numbers with a period in source code).

The difference between the precise (decimal) and approximating (float, real) data types matters little for a pure value conversion. As soon as float values are involved in arithmetic — including aggregations inside an ETL pipeline — rounding errors can accumulate. In those situations, decimal is preferable to float.

Take-Away:

  • Type choice by required accuracy: real (binary32, ~7 significant digits) for sensor values and pixel coordinates, float (binary64, ~15 significant digits) for scientific calculations — decimal for anything with strict exactness requirements (money, quantities, regulated reporting figures).
  • TRY_CONVERT(float, '') returns 0, not NULL — map empty strings to NULL via CASE/TRIM before the conversion.
  • TRY_CONVERT(float, '1,234') returns NULL (locale-independent, period only as decimal separator) — normalize commas to periods via REPLACE(',', '.') before the conversion.
  • Postgres counterpart: double precision (= float(53), binary64) and real (= float(24), binary32); no built-in try_cast in Postgres 18.0, write your own wrapper. to_number returns numeric and is locale-dependent via lc_numeric.

FAQ

Why does TRY_CONVERT(float, '1,234') return NULL?

TRY_CONVERT(float, ...) is locale-independent and accepts only the period as decimal separator — neither SET LANGUAGE nor SET DATEFORMAT affect the interpretation. The moment a comma appears in the string, the conversion fails. If the source delivers German notation ('1.234,56'), normalize via REPLACE first: REPLACE(REPLACE(@p_input, '.', ''), ',', '.') turns '1.234,56' into '1234.56', which can then be converted with TRY_CONVERT(float, …). For the form without thousands separators, REPLACE(@p_input, ',', '.') is enough (as in the safe pattern above).

float or real — which type when?

The rule of thumb is the source’s accuracy requirement. real (= float(24), binary32, 4 bytes) is enough for values with ~7 significant decimal digits — sensor measurements, pixel coordinates, simple geometric calculations. float (= float(53), binary64, 8 bytes) carries ~15 significant digits and is the default for scientific calculations, high-resolution measurements, and anything where aggregations across many values would accumulate rounding errors. Storage efficiency only pays off in tables with billions of rows; in standard schemas the difference between 4 and 8 bytes per row is negligible.

Why does 0.1 + 0.2 != 0.3 in SQL Server (and in every other IEEE-754-compliant language)?

0.1, 0.2 and 0.3 are not exactly representable in a binary mantissa/exponent form. Analogous to 1/3 in decimal, which can only be written as the infinite sequence 0.333..., 0.1 and 0.2 have a periodic bit pattern in binary and are rounded to 53 mantissa bits when stored as binary64. 0.1 + 0.2 as float therefore yields 0.30000000000000004 — the leftover stems from the rounding remainders of both summands. decimal(2, 1) stores the values decimal-exactly and returns 0.3. The same behavior applies in Postgres, Python, JavaScript and C — IEEE 754 is a cross-platform standard.
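
The claim is easy to verify in T-SQL: the e0 suffix forces a float literal, while a plain decimal literal is typed as numeric:

  1: SELECT CASE WHEN 0.1e0 + 0.2e0 = 0.3e0
  2:             THEN 'equal' ELSE 'not equal'
  3:        END AS [float]      -- not equal
  4:      , CASE WHEN 0.1 + 0.2 = 0.3
  5:             THEN 'equal' ELSE 'not equal'
  6:        END AS [decimal];   -- equal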

When decimal instead of float?

Three rules of thumb:

  1. Money and quantities. Amounts in dollars/cents, stock counts, inventory values belong in decimal(p, s) with s ≥ 2.
  2. Regulated reporting figures. Values that appear in financial statements, tax filings or regulatory reports have to be reproducibly decimal-exact — float produces different cent-level remainders depending on the order of operations.
  3. Aggregation-heavy data. As soon as a value is aggregated across thousands of rows (sums or averages over large fact tables), float rounding errors accumulate. decimal stays exact.
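
A sketch of the accumulation effect, using a TOP-limited cross join over catalog views as a row generator (row count and source views are arbitrary choices):

  1: -- 100,000 rows holding the value 0.1, summed once as float, once as decimal
  2: WITH tally AS ( SELECT TOP (100000) 0.1 AS v
  3:                 FROM sys.all_columns AS a
  4:                 CROSS JOIN sys.all_columns AS b )
  5: SELECT SUM(CONVERT(float, v)) AS float_sum    -- close to, but not exactly, 10000
  6:      , SUM(v)                 AS decimal_sum  -- exactly 10000.0
  7: FROM tally;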

Caveat: decimal is not unconditionally exact — when converting to a lower scale, SQL Server silently rounds, without an error or NULL. TRY_CONVERT(decimal(10, 0), 1234.5) returns 1235, neither NULL nor an error. There is no out-of-the-box mode that signals the loss of decimal places as NULL; only SET NUMERIC_ROUNDABORT ON turns the silent rounding step into an error (default: OFF). For a full treatment of decimal/numeric, see TRY_CONVERT // Converting data to decimal or numeric.
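
The setting in isolation (whether the loss of precision surfaces as a hard error or as a NULL with a warning additionally depends on ARITHABORT):

  1: SELECT CONVERT(decimal(10, 0), 1234.5);  -- 1235: silent rounding (default, NUMERIC_ROUNDABORT OFF)
  2: 
  3: SET NUMERIC_ROUNDABORT ON;
  4: SELECT CONVERT(decimal(10, 0), 1234.5);  -- hard error with ARITHABORT ON; NULL plus warning with ARITHABORT OFF
  5: SET NUMERIC_ROUNDABORT OFF;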

Postgres counterpart to TRY_CONVERT(float, …)?

There is no direct counterpart — CAST(s AS double precision) throws an exception instead of returning NULL, and Postgres 18 (released 2025-09-25) did not add a built-in try_cast or the SQL/JSON CAST ... ON ERROR NULL syntax either. Solution (as of 18.0): a PL/pgSQL wrapper try_cast_double(p_input text) RETURNS double precision with EXCEPTION WHEN invalid_text_representation OR numeric_value_out_of_range THEN RETURN NULL (see Postgres bridge). Because IEEE 754 is identical across platforms, try_cast_double and TRY_CONVERT(float, …) produce bit-identical results for the same input — rounding errors included. For formatted numbers with thousands or decimal separators, to_number(s, format) is the Postgres path, but locale-dependent via lc_numeric.
