
Reading temperature with RTD PT1000

Discussion in 'General Electronics Discussion' started by nickagian, Oct 6, 2010.

  1. nickagian — Oct 6, 2010
    Hi all!

    Attached is a circuit I have designed for measuring temperature with a P1K0.232.6W.A.010 RTD sensor, connected in a 4-wire configuration. The sensor sits on a custom PCB, around 3 m away from the rest of the circuit.

    U2, U3 and R3 form a constant-current source for the RTD. Theoretically, U2 produces 1.25 V and the constant current is around 1.25/3.16k ≈ 396 uA. In reality, the current I measure is 401 uA.
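    As a quick sanity check, the nominal current and its deviation from the measured 401 uA work out as follows (a sketch; the constant names are mine, not from the schematic):

```python
# Nominal excitation current for the RTD, from the values quoted above.
V_REF = 1.25        # V, output of U2 (voltage reference)
R3 = 3.16e3         # ohm, current-setting resistor

i_nominal = V_REF / R3          # expected constant current
i_measured = 401e-6             # A, value measured in-circuit

error_pct = (i_measured - i_nominal) / i_nominal * 100
print(f"nominal: {i_nominal*1e6:.1f} uA, measured: 401.0 uA, "
      f"deviation: {error_pct:+.2f} %")
```

    A deviation of this size is consistent with ordinary tolerance on R3 and the reference, which is why using the measured current (rather than the nominal one) in the conversion matters.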

    The voltage developed across the RTD is first amplified by U5 (with a theoretical gain of [R_gain/100k] + 1 = 2; in reality R_gain measures 99.5k, so the gain is about 1.995) and then measured with a 12-bit ADC integrated in the EM250 ZigBee SoC. The measured value is returned to me directly in mV, with the help of the API Ember offers for the EM250.

    To convert the ADC result into temperature I follow this process:

    1) The measured voltage is divided by 1.995 (the actual gain) to find the real voltage across the RTD.
    2) Then I calculate the resistance of the RTD by dividing that voltage by the measuring current.
    3) To find the temperature, I invert the equation for α: α = (RT - R0)/(R0·T) = 0.00385, i.e. T = (RT - R0)/(R0·α).
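    The three steps can be sketched in code like this (the function and constant names are my own; the exact temperatures that come out depend on which current and gain values you plug in):

```python
# Conversion chain from ADC reading to temperature, using the values
# quoted in the post (measured gain and measured excitation current).
GAIN = 1.995        # measured AD623 gain
I_EXC = 401e-6      # A, measured excitation current
R0 = 1000.0         # ohm, PT1000 resistance at 0 degC
ALPHA = 0.00385     # 1/degC, linear temperature coefficient

def rtd_mv_to_temp(v_rtd_mv):
    r_rtd = (v_rtd_mv / 1000.0) / I_EXC      # step 2: Ohm's law
    return (r_rtd - R0) / (R0 * ALPHA)       # step 3: invert the alpha equation

def adc_mv_to_temp(v_adc_mv):
    return rtd_mv_to_temp(v_adc_mv / GAIN)   # step 1: undo the amplifier gain

# The two direct RTD voltages mentioned further down in the post:
print(rtd_mv_to_temp(448.0))   # measured directly at the RTD legs
print(rtd_mv_to_temp(438.0))   # ADC reading already divided by the gain
```

    Running the two voltages through the same formula makes the ~10 mV discrepancy visible as a temperature difference of roughly 6 degC.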

    The problem I face is that the measured temperature is quite different from the real one. The error is around 6 °C and I cannot find the reason for it.

    I took some more specific measurements and found the following: the voltage measured directly at the two legs of the PT1000 (at the remote PCB) is around 448 mV, which corresponds to about 29 °C. The ADC, on the other hand, measures 438 mV (after dividing its reading by the AD623 gain), which corresponds to about 23 °C.

    Do you have any idea where this error comes from and how I could eliminate it? Is it possible that this 10 mV error is caused by the input bias currents of the AD623? It seems strange, though, that a current of just 20 nA could by itself produce such a large error.

    Is it possible that the input filter formed by R1, R2, C1, C2 causes any problems?

    I suspect the error is caused by resistor tolerances, but I have measured the actual values of the components and used those measured values in the computations, and the problem is still there.

    Can anyone please help me with this? Do you think it is possible to get down to an accuracy of around ±1 °C with this circuit? That was my initial goal. Could the accuracy be improved by calibration? I have no previous experience with that, though, and don't know how to do it.

    Thanks,
    Nikos
     

    Attached Files:

    Last edited: Oct 6, 2010
  2. (*steve*) — Moderator
    You have 3 problems.

    The first is that you need to calibrate your instrument. Due to component tolerances things are going to vary. If you can't eliminate this variation some other way then you need to be able to trim out the errors. Also beware that temperature and time will cause other variations. Allowing slight adjustment of R3 and/or Rgain would be necessary. The problem is that R3 and Rgain essentially control the same thing.

    The second problem is that measuring some of these values changes them. Your multimeter doesn't have infinite impedance, so the values you read are not necessarily the value in the circuit.
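    As a rough illustration of this loading effect: a voltmeter with finite input resistance placed across the RTD forms a divider with it, so the circuit you measure is not quite the circuit you built. Assuming a typical 10 MΩ DMM (my assumption, not a value from the thread):

```python
# Loading of a finite-impedance voltmeter placed across the RTD.
R_RTD = 1117.0      # ohm, roughly the PT1000 around 30 degC
R_METER = 10e6      # ohm, typical DMM input resistance (assumed)

# The current source now sees the RTD in parallel with the meter.
loaded = R_RTD * R_METER / (R_RTD + R_METER)
droop_pct = (R_RTD - loaded) / R_RTD * 100
print(f"apparent resistance drops by {droop_pct:.4f} %")
```

    For this low-impedance node the effect is tiny, but the same mechanism bites much harder when probing high-impedance points, and in-circuit resistance measurements are distorted by every parallel path.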

    The third problem is that platinum sensors are not completely linear. You can be within 1 deg over a small range, but the larger the range, the more you'll be out somewhere.
     
  3. (*steve*) — Moderator
  4. nickagian — Oct 6, 2010
    Hi Steve,

    First of all, thanks for your reply.

    1) OK, my intention is to calibrate it too. The problem is that I don't know what procedure to follow. Can you please outline the basic procedure? I guess I have to expose the RTD to the range over which I need to use it, take the actual values reported by the instrument, and compare them with the values that should theoretically be measured. And afterwards? Do I keep this array of measurements and do some kind of digital interpolation for every measurement I take? Is there another way to calibrate it? The other solution I have in mind is to adjust the resistors you mention until I get the desired result, but that will only be valid at the one temperature used for the adjustment and may create errors at other temperatures. Sorry for the noob questions, but I have never done anything like this before.
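    For reference, one common form of the procedure being asked about is a two-point (slope-and-offset) calibration: record the instrument's reading at two known temperatures, then fit a line mapping reported values to true ones. A generic sketch (the reference temperatures below are made up for illustration, not from the thread):

```python
# Two-point calibration: map instrument readings onto true temperatures.
def make_calibration(t_ref, t_meas):
    # t_ref: the two known/true temperatures
    # t_meas: what the instrument reported at those temperatures
    (r1, r2), (m1, m2) = t_ref, t_meas
    slope = (r2 - r1) / (m2 - m1)
    offset = r1 - slope * m1
    return lambda t: slope * t + offset

# e.g. the instrument reads 23.0 degC in a 29.0 degC bath
# and -38.0 degC in a -40.0 degC bath (illustrative values):
cal = make_calibration((29.0, -40.0), (23.0, -38.0))
print(cal(23.0))   # corrected reading at the first point
```

    With more than two calibration points, a piecewise-linear lookup table generalizes the same idea; two points are enough when the residual error is dominated by gain and offset, as it is with resistor tolerances.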

    2) For the second problem, I fully understand your point. I think it will only be solved by the calibration procedure, because otherwise I cannot be sure of the components' real values.

    3) Regarding your third point: this is something I actually hadn't considered during the design, and in a second version of the circuit I may switch to a more linearized solution (like the last link you gave me). Anyway, the temperature range of interest is the standard industrial range (-40 up to 80 °C), and in this range the non-linearity of the RTD looks quite small (around 0.5 °C), so I guess it won't be a big deal for the time being.
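    The residual non-linearity over -40…80 °C can be quantified by comparing the simple α model against the Callendar–Van Dusen equation with the standard IEC 60751 platinum coefficients (this comparison is my addition, not from the thread):

```python
# Error of the linear alpha model relative to Callendar-Van Dusen for a PT1000.
R0 = 1000.0
A, B, C = 3.9083e-3, -5.775e-7, -4.183e-12   # IEC 60751 coefficients
ALPHA = 0.00385                              # linear model from the thread

def r_cvd(t):
    # Callendar-Van Dusen resistance; the C term applies only below 0 degC.
    c = C * (t - 100.0) * t**3 if t < 0 else 0.0
    return R0 * (1.0 + A * t + B * t * t + c)

def t_linear(r):
    return (r - R0) / (R0 * ALPHA)           # simple alpha-model inverse

for t in (-40.0, 0.0, 40.0, 80.0):
    err = t_linear(r_cvd(t)) - t
    print(f"{t:+6.1f} degC -> linear-model error {err:+.2f} degC")
```

    The error is zero at 0 °C by construction and grows toward the ends of the range, largest at the cold end.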

    Thanks,
    Nikos
     